I'm trying to start another script from Python and then supply an answer to its input() prompt. This is the main script:
import subprocess
import sys
import platform
cmdline = ['py', 'ciao.py']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
for line in cmd.stdout:
    if line == b'Loading...\r\n':
        print("sending data...")
        cmd.communicate(b"test\n")[0]
        print("done")
    print(line)
print(line)
And this is ciao.py:
import os
import re
import time
print("Loading...")
ciao = input("> ")
print(ciao)
os.system("mkdir okfunziona")
print("Done")
while 1:
    time.sleep(10)
The main script manages to send "test" but then hangs and does not print "done" to the console.
The problem occurs on both Windows and Linux.
EDIT:
OK, I have tested Ashish Nitin Patil's example, but I only see the b'Loading...\r\n' output; I do not see the other outputs of the secondary script, such as "> " or "Done". It seems that cmd.stdout.readline() only works the first time, because the script does not end.
See this answer (and others on that question) for inspiration. For your case, you should not be using communicate; instead, use stdin.write and stdout.readline.
Your main script might look like below -
while True:
    line = cmd.stdout.readline()
    print(line)
    if line.strip() == b'Loading...':
        print("sending data...")
        cmd.stdin.write(b"test\n")
        cmd.stdin.close()
        print("done")
    elif line.strip() == b'Done':
        break
The outputs -
b'Loading...\n'
sending data...
5
done
b'> test\n'
b'Done\n'
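Regarding the EDIT above: a likely reason readline() sees nothing after Loading... is that the child's later prints stay in its block-buffered stdout (the child never exits, so the buffer is never flushed to the pipe). One hedged way to test that is to start the child interpreter unbuffered with the standard -u flag:
import subprocess

# same as before, but run ciao.py with unbuffered output (-u)
cmdline = ['py', '-u', 'ciao.py']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)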
Related
I have the following problem
import os
import json
import wmi
from random import choice
import time
filename = "kill.json"
with open(filename) as file:
    kill = json.load(file)

def taskKill(imageNames: list):
    cmdPrefix = 'taskkill /F /IM '
    for imageName in imageNames:
        cmd = cmdPrefix + imageName
        os.system(cmd)

while 1 == 1:
    c = wmi.WMI()

    def check_process_running(rty):
        if(c.Win32_Process(name=rty)):
            print("Process is running")
            taskKill(kill)
            return
        else:
            print("Process is not running")

    StrA = choice(kill)
    check_process_running(StrA)
This code is supposed to detect whether a program is open and then close it, but no matter how I change it, it always says "Process is not running".
The output of your script depends on the line if(c.Win32_Process(name=rty)) - it seems the check on Win32_Process never succeeds (it always evaluates as false).
Insert a print statement with the value of Win32_Process before this line.
Have you tried providing the argument as a literal string ("StrA" instead of StrA)?
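A small diagnostic sketch along those lines (it reuses the WMI check from the question; the print is purely for debugging, and "python.exe" is just an example name):
import wmi

c = wmi.WMI()

def check_process_running(rty):
    matches = c.Win32_Process(name=rty)
    # show exactly what the WMI query returned before branching on it
    print("Win32_Process(name={0!r}) returned: {1!r}".format(rty, matches))
    if matches:
        print("Process is running")
    else:
        print("Process is not running")

check_process_running("python.exe")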
To check all current running processes, use:
import os, wmi
c = wmi.WMI()
for process in c.Win32_Process(name="python.exe"):
    print(process.ProcessId, process.Name)

print("current processId:", os.getpid())
I am running a script that iterates through a text file. On each line of the text file there is an IP address. The script grabs the banner, then writes the IP + banner to another file.
The problem is, it just stops around 500 lines, more or less, with no error.
Another weird thing is that if I run it with python3 it does what I said above. If I run it with python, it iterates through those 500 lines and then starts again at the beginning. I noticed this when I saw repetitions in my output file. Anyway, here is the code; maybe you can tell me what I'm doing wrong:
import os
import subprocess
import concurrent.futures
#import time, random
import threading
import multiprocessing
with open("ipuri666.txt") as f:
    def multiprocessing_func():
        try:
            line2 = line.rstrip('\r\n')
            a = subprocess.Popen(["curl", "-I", line2, "--connect-timeout", "1", "--max-time", "1"], stdout=subprocess.PIPE)
            b = subprocess.Popen(["grep", "Server"], stdin=a.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            #a.stdout.close()
            out, err = b.communicate()
            g = open("IP_BANNER2", "a")
            print("out: {0}".format(out))
            g.write(line2 + " " + "out: {0}\n".format(out))
            print("err: {0}".format(err))
        except IOError:
            print("Connection timed out")

    if __name__ == '__main__':
        #starttime = time.time()
        processes = []
        for line in f:
            p = multiprocessing.Process(target=multiprocessing_func, args=())
            processes.append(p)
            p.start()
        for process in processes:
            process.join()
If your use case allows, I would recommend just rewriting this as a shell script; there is no need to use Python. (This would likely solve your issue indirectly.)
#!/usr/bin/env bash
readarray -t ips < ipuri666.txt
for ip in "${ips[@]}"; do
    output=$(curl -I "$ip" --connect-timeout 1 --max-time 1 | grep "Server")
    echo "$ip $output" >> fisier.txt
done
The script is slightly simpler than what you are trying to do; for instance, I do not capture the error output. This should be pretty close to what you are trying to accomplish. I will update again if needed.
I'm invoking another program from the command line to create visual studio solutions and build them. This program outputs the results of those commands.
I want to print warning lines that are output in yellow text rather than the default grey and error lines in red.
Let's assume that my cmd.exe console has already been modified to support rendering ANSI escape codes for colored output.
I've done quite a bit of searching for solutions, but most of the things I've found are made for Linux/macOS. I did find a script that, given regexes as input, could replace text using the specified rules.
regex script
Is it possible for me to run this script in the background, still connected to cmd.exe, so that it applies the regex search and replace to all text output before it is displayed in the cmd.exe window? I could put this into a batch file or Python script.
I wanted to lay out the specific application, but to make this question potentially more generic: how do I attach an existing script/program to a running cmd.exe prompt in the background, so that the user can still run commands in the prompt while the background program processes the output of those commands?
I'm open to trying powershell if there are no other performant viable solutions that exist.
The regular expression to detect if a line is an error just searches for the word error
"\berror\b"
It's the same search for a warning
"\bwarning\b"
Edit: Adding the better solution first. This solution sets up a pipe so it can receive the output from the external program, then prints the colorized result in real time.
#Python 2
from subprocess import Popen, PIPE

def invoke(command):
    process = Popen(command, stdout=PIPE, bufsize=1)
    with process.stdout:
        # b'' is bytes. Per various other SO posts, we iterate this way
        # to work around bugs in Python 2
        for line in iter(process.stdout.readline, b''):
            line = line.rstrip()
            if not line:
                continue
            line = line.decode()
            if "error" in line:
                print (bcolors.FAIL + line + bcolors.ENDC)
            elif "warning" in line:
                print (bcolors.WARNING + line + bcolors.ENDC)
            else:
                print (line)
    error_code = process.wait()
    return error_code
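A hedged usage sketch for the invoke helper above (the msbuild command line is purely an example, and bcolors must be defined as in the second snippet below):
if __name__ == "__main__":
    # hypothetical build command; substitute your real solution/build invocation
    exit_code = invoke(["msbuild", "MySolution.sln", "/m"])
    print ("build finished with exit code {0}".format(exit_code))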
To accomplish this, I piped the output of the build command to a file. I then wrote this Python script to install a required dependency, loop through the file contents, and print the data with the appropriate coloring.
I will now look into a solution that colors the output in real time, as this solution requires the user to wait for the build to complete before seeing the colored output.
#Python 2
import sys   # needed for sys.argv and sys.exit below
import pip

def install(package):
    if hasattr(pip, 'main'):
        pip.main(['install', package])
    else:
        pip._internal.main(['install', package])

class bcolors:
    WARNING = '\033[93m'
    FAIL = '\033[91m'
    ENDC = '\033[0m'

def print_text():
    install('colorama')
    try:
        import colorama
        colorama.init()
    except:
        print ("could not import colorama")

    if len(sys.argv) != 2:
        print ("usage: python pretty_print \"file_name\"")
        return 0
    else:
        file_name = sys.argv[1]
        with open(sys.argv[1], "r") as readfile:
            for line in readfile:
                line = line.rstrip()
                if not line:
                    continue
                if "error" in line:
                    print (bcolors.FAIL + line + bcolors.ENDC)
                elif "warning" in line:
                    print (bcolors.WARNING + line + bcolors.ENDC)
                else:
                    print (line)
        return 0

if __name__ == "__main__":
    ret = print_text()
    sys.exit(ret)
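As a side note, the plain substring checks above could be tightened to the \berror\b and \bwarning\b patterns from the question; a sketch (the sample lines are invented purely for illustration):
import re

error_re = re.compile(r"\berror\b")
warning_re = re.compile(r"\bwarning\b")

for line in ["build started", "foo.cpp(10): warning C4100: unreferenced parameter", "LINK : fatal error LNK1104"]:
    if error_re.search(line):
        print ("ERROR:   " + line)
    elif warning_re.search(line):
        print ("WARNING: " + line)
    else:
        print (line)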
I have written this demo script to ask my question about subprocess.call().
I am trying to run Python test scripts one after another. However, in this scenario, when one of the tests aborts due to an invalid test condition, I want to terminate subprocess.call() and move on to the next test script. I have read through other questions but couldn't find a sufficient explanation. I appreciate any suggestion or help in this matter. Below are the demo files.
File1: listscripts.py -> this file lists all tests from a folder and runs them using subprocess.call()
import os
from subprocess import *
import sys, os, time

Lib_Path = "C:\\Demo\\question"
sys.path.append(Lib_Path)
import globalsvar  # has a global variable

list = os.listdir("C:\\Demo\\question\\scripts")  # this has 3 example basic scripts

for testscripts in list:
    aborttest = globalsvar.aborttestcall  # when it encounters an invalid condition from testscript1 thru 3, call() should terminate and go to the next test
    while not aborttest:
        Desttestresultpath = os.path.join('C:/Demo/question/scripts', testscripts)
        call(["python", Desttestresultpath])  # calls individual scripts
        aborttest = True

exit(1)
File2: globalsvar.py ( aborttestcall = False )
testscript1.py, testscript2.py, and testscript3.py -> these contain some print statements and are placed in C:/Demo/question/scripts
testscript1.py and testscript3.py:
import sys,os,time
Lib_Path = "C:\\Demo\\question"
sys.path.append(Lib_Path)
import globalsvar
print "Start of Test\n"
print "testing stdout prints --1"
time.sleep(1)
globalsvar.aborttestcall = False
print "testing stdout prints --2"
time.sleep(1)
print "testing stdout prints --3"
time.sleep(1)
testscript2.py:
import sys,os,time
Lib_Path = "C:\\Demo\\question"
sys.path.append(Lib_Path)
import globalsvar
print "Start of Test\n"
print "testing stdout prints --1"
time.sleep(1)
globalsvar.aborttestcall = True
print "testing stdout prints --2"
time.sleep(1)
print "testing stdout prints --3"
time.sleep(1)
You can run your scripts (among different possibilities) like this:
import subprocess
import os
for file_item in os.listdir('scripts'):
    script = os.sep.join(['scripts', file_item])
    return_value = subprocess.call(['python', script])
    print "OUTPUT: " + str(return_value)
Meanwhile, your inner scripts can exit with an exit code that you can evaluate in your calling process:
import time
import sys
print "Doing subprocess testing stuff"
time.sleep(2)
print "Doing more subprocess testing stuff"
# simulate error
time.sleep(2)
print "Error, jump out of the process"
sys.exit(1)
# finish
time.sleep(2)
print "done"
# this can be left out since it is called implicitly
# on a successful exit from the process
# sys.exit(0)
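Tying the two together, a minimal sketch of how the calling loop could evaluate those exit codes (it assumes the same 'scripts' folder as above):
import os
import subprocess

results = {}
for file_item in os.listdir('scripts'):
    script = os.sep.join(['scripts', file_item])
    return_value = subprocess.call(['python', script])
    if return_value != 0:
        # the inner script aborted itself via sys.exit(1); just continue with the next one
        results[script] = 'aborted'
    else:
        results[script] = 'passed'

for script, outcome in results.items():
    print("{0}: {1}".format(script, outcome))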
I have a script that I want to run from within Python (2.6.5) that follows the logic below:
Prompts the user for a password. It looks like "Enter password: " (note: input does not echo to the screen)
Output irrelevant information
Prompt the user for a response ("Blah Blah filename.txt blah blah (Y/N)?: ")
The last prompt line contains text which I need to parse (filename.txt). The response provided doesn't matter (the program could actually exit here without providing one, as long as I can parse the line).
My requirements are somewhat similar to Wrapping an interactive command line application in a Python script, but the responses there seem a bit confusing, and mine still hangs even when the OP mentions that it doesn't for him.
Through looking around, I've come to the conclusion that subprocess is the best way of doing this, but I'm having a few issues. Here is my Popen line:
p = subprocess.Popen("cmd", shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT, stdin=subprocess.PIPE)
When I call read() or readline() on stdout, the prompt is printed to the screen and it hangs.
If I call write("password\n") on stdin, the prompt is written to the screen and it hangs. The text in write() is not written (I don't see the cursor move to a new line).
If I call p.communicate("password\n"), I get the same behavior as write().
I was looking for a few ideas on the best way to write to stdin and, if you're feeling generous, possibly how to parse the last line of the output, though I could probably figure that out eventually.
If you are communicating with a program that subprocess spawns, you should check out A non-blocking read on a subprocess.PIPE in Python. I had a similar problem with my application and found using queues to be the best way to do ongoing communication with a subprocess.
As for getting values from the user, you can always use the raw_input() builtin to get responses, and for passwords, try using the getpass module to get non-echoing passwords from your user. You can then parse those responses and write them to your subprocess' stdin.
I ended up doing something akin to the following:
import sys
import subprocess
from threading import Thread
try:
    from Queue import Queue, Empty
except ImportError:
    from queue import Queue, Empty  # Python 3.x

def enqueue_output(out, queue):
    for line in iter(out.readline, b''):
        queue.put(line)
    out.close()

def getOutput(outQueue):
    outStr = ''
    try:
        while True:  # Adds output from the Queue until it is empty
            outStr += outQueue.get_nowait()
    except Empty:
        return outStr

p = subprocess.Popen("cmd", stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=False, universal_newlines=True)

outQueue = Queue()
errQueue = Queue()

outThread = Thread(target=enqueue_output, args=(p.stdout, outQueue))
errThread = Thread(target=enqueue_output, args=(p.stderr, errQueue))

outThread.daemon = True
errThread.daemon = True

outThread.start()
errThread.start()

try:
    someInput = raw_input("Input: ")
except NameError:
    someInput = input("Input: ")

p.stdin.write(someInput)

errors = getOutput(errQueue)
output = getOutput(outQueue)
Once you have the queues made and the threads started, you can loop through getting input from the user, getting errors and output from the process, and processing and displaying them to the user.
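A rough sketch of what that loop could look like, reusing the names from the snippet above (the prompt text and the sleep interval are arbitrary choices):
import time

while p.poll() is None:                  # keep going until the child exits
    try:
        someInput = raw_input("Input: ")
    except NameError:
        someInput = input("Input: ")     # Python 3.x
    p.stdin.write(someInput + "\n")
    p.stdin.flush()
    time.sleep(0.2)                      # give the child a moment to respond
    errors = getOutput(errQueue)
    output = getOutput(outQueue)
    if output:
        print(output)
    if errors:
        print(errors)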
Using threading might be slightly overkill for simple tasks.
Instead, os.spawnvpe can be used. It will spawn your shell script as a process, and you will be able to communicate interactively with the script.
In this example I passed the password as an argument; obviously that is not a good idea.
import os
import sys
from getpass import unix_getpass
def cmd(cmd):
    cmd = cmd.split()
    code = os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
    if code == 127:
        sys.stderr.write('{0}: command not found\n'.format(cmd[0]))
    return code

password = unix_getpass('Password: ')

cmd_run = './run.sh --password {0}'.format(password)
cmd(cmd_run)

pattern = raw_input('Pattern: ')

lines = []
with open('filename.txt', 'r') as fd:
    for line in fd:
        if pattern in line:
            lines.append(line)

# manipulate lines
If you just want a user to enter a password without it being echoed to the screen just use the standard library's getpass module:
import getpass
print("You entered:", getpass.getpass())
NOTE: The prompt for this function defaults to "Password: ". Also, this will only work on command lines where echoing can be controlled, so if it doesn't work, try running it from a terminal.
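If you want the exact prompt from the question, getpass.getpass accepts a custom prompt string:
import getpass

# "Enter password: " matches the prompt described in the question
password = getpass.getpass("Enter password: ")
print("You entered:", password)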