My overall goal with these scripts is to do an efficient ping sweep of a /8 network. To do this I am running 32 scripts at the same time, each scanning a /13.
The basic structure of each script is:
import ipaddress
import time
import fileinput
import pingpack

set = 0
mainset = 0
while mainset < 8:
    while set < 256:
        net_addr = ("10.{}.{}.0/24".format(mainset, set))
        ip_net = ipaddress.ip_network(net_addr)
        all_hosts = list(ip_net.hosts())
        f_out_file = ('{}_Ping.txt'.format(mainset))
        for i in range(len(all_hosts)):
            output = pingpack.ping("{}".format(all_hosts[i]))
            if output != None:
                with open(f_out_file, 'a', newline="\n") as file:
                    file.write("{}, Y\r\n".format(all_hosts[i]))
        print("{}".format("Subnet Complete"))
        set = set + 1
set = 0
The script itself works, runs, and gives me good output when run by itself. The issue I am running into is that when I get 32 of these running, one per subnet, they run for about 8 set loops before the Python processes lock up and stop writing.
The script I am using to start the 32 is as follows
from subprocess import Popen, PIPE
import time

i = 0
count = 0
while i < 32:
    process = Popen(['ping{}.py'.format(i), "{}".format(count)], stdout=PIPE, stderr=PIPE, shell=True)
    print(count)
    print(i)
    i = i + 1
    count = count + 8
    time.sleep(1)
In this case, yes, I do have 32 duplicate scripts, each with two lines changed for its /13 subnet. It may not be the most elegant approach, but it gets them started and running.
How would I go about finding the reason these scripts stop?
Side note: yes, I know I can do this with something like NMAP or Angry IP Scanner, but they both take 90+ hours to scan an entire /8; I am trying to shorten this to something that can be run in a more reasonable timeframe.
Your first problem is that set is never set back to zero when you move on to the next mainset. Your second problem is that mainset never increments. Rather than a pair of obscure while-loops, why not:
for mainset in range(8):
    for set in range(256):
Also, in range(len(all_hosts)) is a code smell (and it turns out that you never use i except to write all_hosts[i]). Why not:
for host in all_hosts:
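Putting both fixes together, a minimal sketch of the tightened loop (still assuming your pingpack.ping helper and the same output format) might look like:

import ipaddress
import pingpack  # the ping helper used in the question

for mainset in range(8):
    out_file = '{}_Ping.txt'.format(mainset)
    for subnet in range(256):
        ip_net = ipaddress.ip_network("10.{}.{}.0/24".format(mainset, subnet))
        for host in ip_net.hosts():
            # record only the hosts that actually answered
            if pingpack.ping(str(host)) is not None:
                with open(out_file, 'a', newline="\n") as f:
                    f.write("{}, Y\r\n".format(host))
        print("Subnet Complete")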
I am using Python 3.6.4 on a Windows 7 system (I have other systems like Win 10 and Android but this is my starting point).
INKEY$, for those not familiar with BASIC (pretty much any flavor), is a function that checks the keyboard buffer, returns its contents as a string (or an empty string "" if there was no data), and clears the buffer. The length of the returned string depends on the data in the buffer, normally 0, 1, or 2 characters for a single keystroke (a fast typist could fill the small buffer between checks in the old days). The Enter key was neither needed nor processed (unless that was what you were looking for), and the program did not pause unless programmed to do so.
Pauser:
a$=""
while a$=""
    a$=inkey$
wend
Flow Interrupter:
a=0
while a < 1000
    a=a+1
    print a
    a$=inkey$
    if a$<>"" then exit while
wend
Quick parser:
a$=inkey$
if a$<>"" then
    rem process input
    rem like arrow keys/a w s z for directional movement
    rem useful for games and custom editors
end if
I want to know whether Python has a simple cross-platform function (i.e. not 10+ lines of code, unless it lives in an importable module/class) that is the equivalent of the INKEY$ function. Also, I do not want to import the gaming module(s); I just want an INKEY$ equivalent (simple, straightforward, small), something like:
import inkey
a = inkey.inkey()
Update #1:
After I installed the readchar module and corrected an error reported by Python (stdout.write(a) needs to be stdout.write(str(a)), since the variable a appears to be returned as a byte string from the readchar() function) while using the code listed by Mr. Stratton below, I only get a continuous stream of b'\xff', plus console-echoed characters if there were any keypresses.
Stripping it down to use only the function doesn't help either:
from readchar import readchar
from sys import stdout
import os

# I like clear screens, and I can not lie
os.system('cls')    # For Windows
#os.system('clear') # For Linux/OS X

def inkey():
    "INKEY$ function"
    return readchar()

# let the processing hit the floor, elsewhere
b = 0
step = 1
while b < 1000:
    b = b + step
    print(b)
    # convert bytes to integers
    a = int.from_bytes(inkey(), "big")
    # remember I keep getting b'\xff' (255) when buffer is empty
    if chr(a) == "a":
        step = step - 1
        a = 255  # don't stop
    if chr(a) == "l":
        step = step + 1
        a = 255  # don't stop
    if a != 255:
        break
It is supposed to count b from 0 to 999, stopping on almost any keypress; 'a' decreases the step and 'l' increases it. Instead, it prints the keypress either before or after the value of b, depending on timing, and continues until b = 1000. Nothing I did made a difference.
While the Pauser function can be replaced with an input() call (i = input("Press Enter key to continue")), the other two variants can't be changed so easily, it seems.
The closest to what you’re looking for is probably the readchar library.
Here’s an example that resembles the old BASIC logic:
from readchar import readchar
from sys import stdout

a = ' '
while ord(a) not in [3, 24, 27]:
    a = readchar()
    stdout.write(a)
    stdout.flush()
    if ord(a) == 13:
        stdout.write("\n")
Those numbers that break the loop are CTRL-C, CTRL-X, and ESC respectively. The number 13 is the carriage return; the example writes a line feed following each carriage return to avoid overwriting text.
The equivalent of the old ASC(A$) in BASIC is ord(a) in Python. (And CHR$(A) is chr(a).)
Note that this will block on reading. If no keypress is waiting, Python will stop on the line a = readchar() until a key is pressed. To get the full effect of BASIC’s INKEY$, you’ll need to verify that some data is waiting before reading it. You can do this using Python’s select library.
from readchar import readchar
from sys import stdin, stdout
from select import select

def inkey():
    if select([stdin,], [], [], 0.0)[0]:
        return readchar()
    return ''

a = ''
timer = 0
while a != 'Q':
    a = inkey()
    if a != '':
        stdout.write(a)
        stdout.flush()
        if ord(a) == 13:
            stdout.write("\n")
    timer += 1
    if timer > 1000000:
        print("Type faster, human!")
        timer = 0
This defines an inkey function that returns the empty string if nothing is waiting; if a keypress is waiting, it returns readchar(). Every 1,000,000 times through the loop, it tells the human on the other end to type faster.
In this version it quits on a capital "Q", and it does not block CTRL-C, CTRL-X, or ESC from breaking out of the program altogether.
This may have trouble if you're using Windows, as select.select on Windows works only with sockets, not with regular file handles like stdin. I have no means of testing that here, however.
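If that turns out to be a problem, one possible Windows-only fallback (not tested here) is the standard-library msvcrt module, whose kbhit()/getwch() pair gives an INKEY$-like non-blocking check:

import msvcrt

def inkey():
    if msvcrt.kbhit():          # True if a keypress is waiting in the buffer
        return msvcrt.getwch()  # read one character without echoing it
    return ''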
You may also want to look at pynput if you don’t need it to look exactly like BASIC. The canonical means to do this in Python is probably to set up an event that calls a function or method. Your script goes on doing its thing, and if that function (or method) is invoked, it cancels or modifies the action of the main loop.
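As a rough sketch of that event-driven style (assuming pynput is installed and using its keyboard.Listener callback API), the counting example from the question might look something like this:

from pynput import keyboard

step = 1
running = True

def on_press(key):
    global step, running
    try:
        if key.char == 'a':
            step -= 1
        elif key.char == 'l':
            step += 1
        else:
            running = False    # any other printable key stops the loop
    except AttributeError:
        running = False        # special keys (arrows, Esc, ...) also stop it

listener = keyboard.Listener(on_press=on_press)
listener.start()               # the listener runs in a background thread

b = 0
while running and b < 1000:
    b += step
    print(b)

listener.stop()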
I am trying to test some programs using Python. I want to see whether, given certain input, they crash, end without errors, or run for longer than a timeout.
Ideally I would like to use subprocess, as I am familiar with it; however, I am able to use any other useful library. I assume that reading core dump notifications is an option, but I do not yet know how to do that, nor do I know whether it is the most effective way.
Using os and What is the return value of os.system() in Python?, a solution could be:
import os
import signal

status = os.system(cmd)
# status is a 16-bit wait status: the low byte holds the signal that
# terminated the command (if any), the high byte holds its exit code.
sig, ret = os.WIFSIGNALED(status), os.WEXITSTATUS(status)

# then check some usual problems:
if sig:
    signum = os.WTERMSIG(status)
    if signum == signal.SIGSEGV:    # 11
        print('crashed by segfault')
    elif signum == signal.SIGABRT:  # 6
        print('was aborted')
    else:                           # SIGALRM (14) and SIGKILL (9) relate to timeouts, if you use them
        print('was stopped abnormally with signal', signum)
else:
    print('program finished properly')
I haven't checked yet if subprocess returns the same status.
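For completeness, here is a rough subprocess sketch (assuming Python 3.5+ on a Unix host; ./program and its argument are made up) that covers the same cases plus the timeout: subprocess reports a signal-killed child as a negative returncode, and the timeout argument handles runs that take too long.

import signal
import subprocess

try:
    proc = subprocess.run(['./program', 'input_file'], timeout=60)
except subprocess.TimeoutExpired:
    print('ran longer than the timeout')
else:
    if proc.returncode == -signal.SIGSEGV:
        print('crashed by segfault')
    elif proc.returncode == -signal.SIGABRT:
        print('was aborted')
    elif proc.returncode < 0:
        print('killed by signal', -proc.returncode)
    else:
        print('finished with exit code', proc.returncode)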
I have two user-defined Python scripts. The first takes a file and processes it, while the second takes the output of the first, runs an executable, and supplies the first script's output to that program with additional formatting.
I need to run these scripts via another python script, which is my main executable script.
I searched a bit on this topic and found two options:
I can use importlib to gather the content of the scripts so that I can call them at appropriate times. This requires the scripts to be under my directory or a modification of the path environment variable, so it looks a bit ugly at best and does not seem Pythonic (a rough sketch of this route follows below).
The built-in eval function. This requires the user to write a server/client-like structure, because the second script might have to run the program more than once while the first script is still producing output.
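For reference, a minimal sketch of the importlib route mentioned above (the file paths and the process() function are made up for illustration):

import importlib.util

def load_script(path, name):
    # load a user-supplied script from an arbitrary path as a module
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

first = load_script("benchmarks/prepare_input.py", "prepare_input")
data = first.process("input.dat")   # assumes the user script exposes process()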
I think I'm designing something wrong, but I cannot come up with a better approach.
A more detailed explanation (maybe gibberish):
I need to benchmark some programs. While doing so I have a standard form of data, and this data needs to be supplied to the benchmarked programs. The scripts are (due to the nature of the benchmark) specific to each program and need to be bundled with the benchmark definition, yet I need to create this program as a standalone, configurable tester. I think I have designed something wrong and would love to hear other design approaches.
PS: I do not want to limit the user, and this is the reason I chose to run Python scripts.
I created a few test scripts to make sure this works.
The first one (count_01.py) sleeps for 100 seconds, then counts from 0 to 99 and writes the numbers to count_01.output.
The second one (count_02.py) reads the output of the first one (count_01.output), adds 1 to each number, and writes the result to count_02.output.
The third script (chaining_programs.py) runs the first one and waits for it to finish before calling the second one.
# count_01.py --------------------
from time import sleep
sleep(100)

filename = "count_01.output"
file_write = open(filename, "w")
for i in range(100):
    #print " i = " + str(i)
    output_string = str(i)
    file_write.write(output_string)
    file_write.write("\n")
file_write.close()
# ---------------------------------
# count_02.py --------------------
file_in = "count_01.output"
file_out = "count_02.output"

file_read = open(file_in, "r")
file_write = open(file_out, "w")
for i in range(100):
    line_in = file_read.next()
    line_out = str(int(line_in) + 1)
    file_write.write(line_out)
    file_write.write("\n")
file_read.close()
file_write.close()
# ---------------------------------
# chaining_programs.py -------------------------------------------------------
import subprocess
import sys
#-----------------------------------------------------------------------------
path_python = 'C:\Python27\python.exe' # 'C:\\Python27\\python.exe'
#
# single slashes did not work
#program_to_run = 'C:\Users\aaaaa\workspace\Rich_Project_044_New_Snippets\source\count.py'
program_to_run_01 = 'C:\\Users\\aaaaa\\workspace\\Rich_Project_044_New_Snippets\\source\\count_01.py'
program_to_run_02 = 'C:\\Users\\aaaaa\\workspace\\Rich_Project_044_New_Snippets\\source\\count_02.py'
#-----------------------------------------------------------------------------
# waits
sys.pid = subprocess.call([path_python, program_to_run_01])
# does not wait
sys.pid = subprocess.Popen([path_python, program_to_run_02])
#-----------------------------------------------------------------------------
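If the second step also needs to finish before anything else runs, a small variation (using the same variables as above) is to keep the Popen handle and wait on it:

proc = subprocess.Popen([path_python, program_to_run_02])
return_code = proc.wait()   # block until count_02.py finishes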
TL;DR:
print() output stops updating in a Windows Console. The same code executes fine in IDLE. The program keeps executing even though the Console stops updating.
Background
I have a file, test.py that contains:
Edit: Included the conditions that I used to see whether the Console was updating. Eventually the series of X values never prints again in the Console, and the Console stops scrolling (as it normally does when output is being generated at the bottom).
count = 0
while True:
    print("True")
    count += 1
    if count == 10:
        print("XXXXXXXXX")
        count = 0
When I run this in cmd.exe it obviously prints True a very large number of times.
However, after about 25 seconds of running, it stops printing any more, though the program is still running and can be seen in the Task Manager.
I have a program with some progress indicators that end up staying at, say, 50% even though they have moved well beyond 50%, simply because the print() output is not showing in the Console.
Edit: The true use case problem.
The above code was just a test file to see whether printing to the Console stopped in all programs, not just the one I was running. In practice, my program prints to the Console and looks like:
line [10] >> Progress 05%
Where line [10] isn't real; I merely typed it here to show you that print() writes to that line of the Console window. As my program continues, it increments:
line [10] >> Progress 06%
line [10] >> Progress 11%
.
.
.
line [10] >> Progress 50%
Each time line [10] is overwritten. I use ANSI escape characters and colorama to move the Console cursor accordingly:
print('\x1b[1000D\x1b[1A')
This moves the cursor 1000 columns left and 1 row up (i.e., to the start of the previous line).
Something is happening where the print("Progress " + prog + "%") output no longer shows up in the Console, even though the program keeps going and eventually the next bit of Python gets executed:
line [11] >> Program Complete...
I verified the results, which get put into a folder, so the program continued to run while the Console did not update.
Edit: Here is the function that runs the updates to stdout.
def check_queue(q, dates, dct):
    out = 0
    height = 0
    # print the initial columns and rows of output
    # each cell has a unique id
    # so they are stored in a dictionary
    # then I convert to list to print by subscripting
    for x in range(0, len(list(dct.values())), 3):
        print("\t\t".join(list(dct.values())[x:x+3]))
        height += 1  # to determine where the top is for cursor
    while True:
        if out != (len(dates) * 2):
            try:
                status = q.get_nowait()
                dct[status[1]] = status[2]
                print('\x1b[1000D\x1b[' + str(height + 1) + 'A')
                # since there was a message that means a value was updated
                for x in range(0, len(list(dct.values())), 3):
                    print("\t\t".join(list(dct.values())[x:x+3]))
                if status[0] == 'S' or 'C' or 'F':
                    out += 1
            except queue.Empty:
                pass
        else:
            break
In short, I pass a message to the queue from a thread. I then update a dictionary that holds unique cell IDs. I update the value, move the cursor in Console to the upper left position of the printed list, and print over it.
Question:
When using stdout, is there a limit to how many times you can print to it in a period of time?
That may well be an illusion (perhaps because the console has a maximum number of buffered lines, so new lines eventually push the earliest ones out).
There's definitely no limit on how much you can print. You could verify this with something that changes each iteration, for example a loop that counts the number of iterations:
import itertools

for i in itertools.count():
    print(i, "True")
I cannot reproduce the problem in Windows 10 using 64-bit Python 3.6.2 and colorama 0.3.9. Here's the simple example that I tested:
import colorama
colorama.init()

def test(M=10, N=1000):
    for x in range(M):
        print('spam')
    for n in range(N):
        print('\x1b[1000D\x1b[' + str(M + 1) + 'A')
        for m in range(M):
            print('spam', m, n)
Each pass successfully overwrites the previous lines. Here's the final output from looping N (1000) times:
>>> test()
spam 0 999
spam 1 999
spam 2 999
spam 3 999
spam 4 999
spam 5 999
spam 6 999
spam 7 999
spam 8 999
spam 9 999
If this example fails for you, please update your question and include the versions of Windows, Python, and colorama that you're testing.
It sounds like it might be a system limitation, not a Python process issue. I've never run across a 'hang' related to print statements (or any built-in function); however, you may want to look at profiling performance and memory usage:
High Memory Usage Using Python Multiprocessing
As far as how many times you can print in a period of time, that is almost exclusively determined by how fast the system executes the code. You could run some benchmark tests (execution time / number of executions) across several platforms to measure performance with specific system specs, but I'd say the likely cause of your issue is system/environment related.
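As an illustration of that kind of benchmark, a rough timing sketch (Python 3; the iteration count is arbitrary) could look like:

import time

N = 100000
start = time.perf_counter()
for i in range(N):
    print(i)
elapsed = time.perf_counter() - start
print("{} prints in {:.2f} s ({:.0f} prints/s)".format(N, elapsed, N / elapsed))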
SO,
The code in question is the following; however, the crash can randomly happen with other scripts too (I don't think the error lies in the code).
For some reason, completely at random, it sometimes crashes and pops up "pythonw.exe has stopped working". It could be after 5 hours, 24 hours, or 5 days... I can't figure out why it's crashing.
from datetime import date, timedelta
from sched import scheduler
from time import time, sleep, strftime
import random
import traceback

s = scheduler(time, sleep)
random.seed()

def periodically(runtime, intsmall, intlarge, function):
    currenttime = strftime('%H:%M:%S')
    with open('eod.txt') as o:
        eod = o.read().strip()
    if eod == "1":
        EOD_T = True
    else:
        EOD_T = False
    while currenttime >= '23:40:00' and currenttime <= '23:59:59' or currenttime >= '00:00:00' and currenttime <= '11:30:00' or EOD_T:
        if currenttime >= '23:50:00' and currenttime <= '23:59:59':
            EOD_T = False
        currenttime = strftime('%H:%M:%S')
        print currenttime, "Idling..."
        sleep(10)
        open("tca.txt", 'w').close
    open("tca.txt", 'w').close
    runtime += random.randrange(intsmall, intlarge)
    s.enter(runtime, 1, function, ())
    s.run()

def execute_subscripts():
    st = time()
    print "Running..."
    try:
        with open('main.csv'):
            CSVFile = True
    except IOError:
        CSVFile = False
    with open('eod.txt') as eod:
        eod = eod.read().strip()
    if eod == "1":
        EOD_T = True
    else:
        EOD_T = False
    if CSVFile and not EOD_T:
        errors = open('ERROR(S).txt', 'a')
        try:
            execfile("SUBSCRIPTS/test.py", {})
        except Exception:
            errors.write(traceback.format_exc() + '\n')
            errors.write("\n\n")
        errors.close()
    print """ %.3f seconds""" % (time() - st)

while True:
    periodically(15, -10, +50, execute_subscripts)
Does anyone know how I can find out why it's crashing, or know why it happens and a way to fix it?
Thanks
- Hyflex
I don't know, but it may be related to the two lines that do this:
open("tca.txt", 'w').close
Those aren't doing what you intend them to do: they leave the file open. You need to call the close method (not merely retrieve it):
open("tca.txt", 'w').close()
^^
But that's probably not it. CPython will automatically close the file object when it becomes garbage (which happens immediately in this case - refcount hits 0 as soon as the statement ends).
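An alternative sketch that doesn't rely on refcounting at all is a with statement, which truncates the file and guarantees it gets closed:

with open("tca.txt", "w"):
    pass  # opening in "w" mode truncates the file; the with-block closes it deterministically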
Maybe you should move to a Linux system ;-)
Idea: would it be possible to run this with python.exe instead, from a DOS box (cmd.exe) you leave open and ignore? A huge problem with debugging pythonw.exe deaths is that there's no console window to show any error messages that may pop up.
Which leads to another question: what's this line doing?
print "Running..."
If you're running under pythonw.exe, you never see it, right? And that can cause problems, depending on exactly which versions of Python and Windows you're running. Standard input and standard output don't really exist under pythonw, and I remember tracking down one mysterious pythonw.exe death to the Microsoft libraries blowing up when "too much" data was written to sys.stdout (which print uses).
One way to tell: if you run this under python.exe instead from a DOS box, and it runs for a year without crashing, that was probably the cause ;-)
Example
Here's a trivial program:
i = 0
while 1:
    i += 1
    with open("count.txt", "w") as f:
        print >> f, i
    print "hi!"
Using Python 2.7.6 under 32-bit Windows Vista, it has radically different behavior depending on whether python.exe or pythonw.exe is used to run it.
Under python.exe:
C:\Python27>python yyy.py
hi!
hi!
hi!
hi!
hi!
hi!
hi!
hi!
hi!
hi!
hi!
...
That goes on forever, and the value in count.txt keeps increasing. But:
C:\Python27>pythonw yyy.py
C:\Python27>
That is, there's no visible output. And that's expected: pythonw runs its program disconnected from the console window.
After a very short time, pythonw.exe silently dies (use Task Manager to see this) - vanishes without a trace. At that point:
C:\Python27>type count.txt
1025
So MS's libraries are still crapping out when "too much" is written to stdout from a disconnected program. Take out the print "hi!", and it runs "forever".
Python 3
This is "fixed" in Python 3, via the dubious expedient of binding sys.stdout to None under its pythonw.exe. You can read the history of this mess here.