I have a python project called the "Remote Dongle Reader". There are about 200 machines that have a "Dongle" attached, and a corresponding .exe called "Dongle Manager". Running the Dongle Manager spits out a "Scan" .txt file with information from the dongle.
I am trying to write a script that runs from a central location with administrative domain access to the entire network. It will read a list of hostnames, go through each one, and bring back all the files. Once it brings back all the files, it will compile them into a CSV.
I have it working on my Lab/Test servers, but on production systems it does not work. I am wondering if this is some sort of login issue, since people may be actively using the system. The process needs to launch silently and do everything in the background. However, since I am connecting as the administrator user, I wonder if there is a clash.
I am not sure what's going on, other than that the application works up until the point where I expect the file to be there. The "Dongle Manager" process starts, but it doesn't appear to be spitting the scan out on any machine not logged in as administrator (the account I am running off of).
Below is the snippet of the WMI section of the code. This was a very quick script, so I apologize for any non-Pythonic statements.
c = wmi.WMI(ip, user=username, password=password)
process_startup = c.Win32_ProcessStartup.new()
process_startup.ShowWindow = SW_SHOWNORMAL
cmd = r'C:\Program Files\Avid\Utilities\DongleManager\DongleManager.exe'
process_id, result = c.Win32_Process.Create(CommandLine=cmd,
                                            ProcessStartupInformation=process_startup)
if result == 0:
    print("Process started successfully: %d" % process_id)
else:
    print("Problem creating process: %d" % result)
while not os.path.exists("A:/" + scan_folder):
    time.sleep(1)
    counter += 1
    if counter > 20:
        failed.append(hostname)
        print("A:/" + scan_folder + " does not exist")
        return
time.sleep(4)
scan_list = os.listdir("A:/" + scan_folder)
scan_list.sort(key=lambda x: os.stat(os.path.join("A:/" + scan_folder, x)).st_mtime, reverse=True)
if not scan_list:  # 'scan_list is []' is always False; truthiness tests for an empty list
    failed.append(hostname)
    return
recursive_overwrite("A:/" + scan_folder + "/" + scan_list[0],
                    "C:\\AvidTemp\\Dongles\\" + hostname + ".txt")
Assuming I get a connection (computer on), it usually fails at the point where it either waits for the folder to be created, or expects something in the list of scan_folder... either way, something is stopping the scan from being created, even though the process is starting.
Edit: I am mounting A:/ elsewhere in the code.
The problem is that you've requested to show the application window, but there is no logged-on desktop to display it on. WMI examples frequently use SW_SHOWWINDOW (or SW_SHOWNORMAL, as here), but that's usually the wrong choice, because with WMI you are typically trying to run something in the background. In that case, SW_HIDE (or nothing) is the better choice.
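For reference, a minimal sketch of that change against the snippet above (ip, username, password, and cmd as in the question; SW_HIDE is the Win32 ShowWindow constant 0):

import wmi

SW_HIDE = 0  # Win32 ShowWindow constant: display no window at all

c = wmi.WMI(ip, user=username, password=password)
process_startup = c.Win32_ProcessStartup.new()
process_startup.ShowWindow = SW_HIDE  # nothing is shown, so no interactive desktop is needed
cmd = r'C:\Program Files\Avid\Utilities\DongleManager\DongleManager.exe'
process_id, result = c.Win32_Process.Create(CommandLine=cmd,
                                            ProcessStartupInformation=process_startup)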
Here's a summary of my setup:
3-axis CNC, controllable via a python script running on a raspberry pi
Windows PC can connect to the pi and run a script
The end goal is for a UI made in C# to initiate an automated test cycle for the CNC to run. In the python program there is a Cnc object that stores the device's current position and contains methods to position it at a certain place.
The problem is that if I run a new script every time I want to move the CNC, I have to re-initialize the Cnc instance and it forgets its position. So I'm wondering if I can have one master program running that contains the one and only Cnc instance; then, when the remote machine wants to tell the CNC to move, it can run a different script with args for the new position: python action.py x y z. This script could then communicate with the master program to call the move method with the appropriate location, without ever having to construct a new Cnc object.
Then, ideally, the master program would indicate when the motion is completed and send a message back to the "action" script; that script would output something to tell the remote system that the action is completed, then exit, ready to be called again with new args.
In the end, the remote system is highly abstracted from any of the workings; it just needs to start the master once, then run the move script with args any time it wants to perform a motion.
Note:
My other idea was to just save a text file with the current position, then always re-initialize the instance with the info in the file.
EDIT: SOLVED... sort of
handler.py
The handler continuously reads a text file named input.txt, looking for a new integer. If one arrives, it updates a text file named output.txt to read '0', does some action with the input (i.e., moves the CNC), then writes the value '1' to output.txt.
from time import sleep

cur_pos = 0
while True:
    with open("input.txt", "r") as f:
        pos = f.readline()
    try:
        if pos == '':
            pass
        else:
            pos = int(pos)
    except ValueError:  # input wasn't an integer
        print(pos)
        print("exiting...")
        exit()
    if cur_pos == pos or pos == '':
        # suggestion from @Todd W to sleep before the next read
        sleep(0.3)
    else:
        print("Current pos: {0:d}, New pos: {1:d}".format(cur_pos, pos))
        print("Updating...")
        with open("output.txt", "w") as f:
            f.write("0")  # signal: busy
        # do some computation with the data (i.e. move the CNC)
        sleep(2)
        cur_pos = pos
        print("Current pos: {0:d}".format(cur_pos))
        with open("output.txt", "w") as f:
            f.write("1")  # signal: done
pass_action.py
The action passer accepts a command line argument, writes it to input.txt, then waits for output.txt to read '1', after which it prints "done" and exits.
import sys
from time import sleep

newpos = sys.argv[1]
with open("input.txt", "w") as f:
    f.write(newpos)
while True:
    sleep(0.1)
    with open("output.txt", "r") as f:
        if f.readline() == '1':
            break
sys.stdout.write("done")
One possible approach might be to make your main python script a webapp using something like Flask or Bottle. Your app initializes the Cnc once, then waits for HTTP input, maybe on an endpoint like 'move'. Your C# app then just sends a REST (HTTP) request to move, e.g. {'coordinates': [10, 15]}, and your app acts on it.
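A minimal sketch of that idea with Flask (the Cnc class below is just a stand-in for the poster's real object):

from flask import Flask, jsonify, request

class Cnc(object):
    # stand-in for the poster's Cnc class
    def __init__(self):
        self.position = (0, 0, 0)

    def move(self, x, y, z):
        self.position = (x, y, z)  # the real class would drive the motors here

app = Flask(__name__)
cnc = Cnc()  # constructed once; lives as long as the app does

@app.route('/move', methods=['POST'])
def move():
    x, y, z = request.get_json()['coordinates']
    cnc.move(x, y, z)  # blocks until the motion is done
    return jsonify({'done': True, 'position': cnc.position})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

The C# side would POST {"coordinates": [10, 15, 0]} to /move and treat the JSON reply as the "motion completed" signal.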
If you really want to be dead simple, have your "main" CNC script watch a designated directory on the file system, looking for a text file that has one or more commands. If multiple files are there, take the earliest file and execute its command(s). Then delete the file (or move it to another directory) and get the next file. If there's no file, sleep for a few seconds and check again. Repeat ad nauseam. Then your C# app just has to write a command file to the correct directory, as in the sketch below.
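A rough sketch of that polling loop; the directory name and one-command-per-line format are assumptions, and handle_command is a placeholder for the real CNC call:

import os
import time

CMD_DIR = "commands"  # hypothetical directory the C# app writes into

def handle_command(line):
    print("executing: " + line)  # placeholder for the real CNC move

while True:
    paths = [os.path.join(CMD_DIR, n) for n in os.listdir(CMD_DIR)]
    if not paths:
        time.sleep(2)  # no file: sleep for a few seconds and check again
        continue
    paths.sort(key=os.path.getmtime)  # earliest file first
    with open(paths[0]) as f:
        for line in f:
            handle_command(line.strip())
    os.remove(paths[0])  # or move it to another directory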
Your better bet is to combine gevent with gipc (https://gehrcke.de/gipc/).
This allows for asynchronous calls to a stack, and communication between separate processes.
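A minimal sketch following the pipe/process pattern from the gipc docs (the worker body is a placeholder; in this setup it would own the one and only Cnc instance):

import gipc

def cnc_worker(reader):
    # this child process owns the single Cnc instance
    while True:
        pos = reader.get()  # blocks until a message arrives
        if pos is None:
            break  # shutdown sentinel
        print("moving to {0}".format(pos))

def main():
    with gipc.pipe() as (reader, writer):
        proc = gipc.start_process(target=cnc_worker, args=(reader,))
        writer.put((10, 15, 0))  # request a move
        writer.put(None)  # tell the worker to exit
        proc.join()

if __name__ == '__main__':
    main()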
User input to my Python program does not respond most of the time when I run it remotely and try to use a PhantomJS webdriver. Everything executes fine, same as it does when I run it locally, except the majority of keypresses don't register at a prompt from raw_input(). I have to hit a key (on average) three times for it to actually show up in the console.
This happens only with raw_input calls in my program after it creates a PhantomJS instance. I singled out the issue by setting w = None, where w is the webdriver. After doing this, user input continued working as it should throughout the program.
I can start another ssh session and stdin works fine. I can also start another instance of my program (python run.py) and before that instance creates a webdriver, raw_input works fine in it as well. I have tried running top in another ssh session and there is no load at all on my server. So only input to the parent python process is affected.
I figure best case scenario, someone can help me get to the root of the problem. I don't have the tools to narrow down the cause of this. Thanks!
Additional info that might be relevant
ps -aux in an ssh session gives me these related processes:
[TTY: pts/1, STAT: S+] python run.py
[TTY: pts/1, STAT: Sl+] node /usr/local/bin/phantomjs --webdriver=52129
[TTY: pts/1, STAT: Sl+] /usr/local/lib/node_modules/phantomjs/lib/phantom/bin/phantomjs --webdriver=52129
All of them using negligible mem and cpu
Selenium webdriver creation is customized; a summarized customdriver.py:
from selenium import webdriver

def mkDriver(params):
    # params aren't relevant because this is the setup I always use on the server
    w = webdriver.PhantomJS(desired_capabilities={'phantomjs.page.settings.loadImages': 'false'})
    loginToFacebook(w)
    loginToTwitter(w)
    # for later: w.quit() leaves the phantom process running
    w.phantomjsPID = collectPhantomjsPID()
    return w  # run.py expects the driver back
...though I don't think the problem relates to any of the other statements there; this function finishes execution and my program continues.
Some notes:
laptop: OS X El Capitan, 2.4GHz intel core i5, 8GB memory
server: Ubuntu Server 14.04, 2.0GHz intel xeon e5, 8GB memory. I was using a micro EC2 instance I set up myself, but someone set this up for me to give me something more powerful to play with. It's running on IBM Softlayer.
Before I asked: I read about stdin and stdout over ssh, and about paramiko as a solution (it wouldn't give the behavior I wanted); checked whether there is a better python function to use than raw_input; looked at differences with pseudo-tty on and off upon ssh (I don't think this applies to how I am running my program); and tried using the -u flag with python.
Initially, all raw_input() statements gave an EOFError when I ran my program on the server. I "fixed" this using a solution I found, which was to import readline in all modules using raw_input. No idea why it worked. I know raw_input() is derived from sys.stdin.readline...
If you're looking at the pseudocode below: the purpose of this structure was to let me make changes to the ui and the functions it calls, then continue testing, without ever having to regenerate anything in resources (most importantly the webdriver, which takes a long time to load). The structure also makes it easy to have a few pre-written tests.
Overview of the modules...
run.py:
params = raw_input('enter parameters')
w = customdriver.mkDriver(params)
t = tweepy_connection()
f = facebook_Graph_API_connection()
s = sqlAlchemy_engine()
resources = [w, t, f, s]
while True:
    choice = raw_input('1: break, 2: restart ui')
    if choice == '1':
        break
    elif choice == '2':
        reload(ui)  # reload takes the module object, not ui.py
        ui.ui_function(resources)
    else:
        continue
ui.py:
def ui_function(resources):
    while True:
        choice = raw_input('1: break, 2: functionA, 3: functionB')
        if choice == '1':
            break
        elif choice == '2':
            reload(file1)
            params = raw_input('enter parameters')
            file1.functionA(resources, params)
        elif choice == '3':
            reload(file2)
            params = raw_input('enter some other parameters')
            file2.functionB(resources, params)
        else:
            continue
Do the raw inputs hang when you only create a PhantomJS instance, or do you do any headless browsing with it? If you're browsing, it would make sense that raw_input hangs, as each website request will consequently stall the shell.
Have you considered multi-threading it?
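A rough sketch of the multi-threading suggestion, under the assumption that the browsing can be pushed off the main thread (customdriver.mkDriver is from the question; the URL is just an example):

import threading
import customdriver  # the poster's module, summarized above

w = customdriver.mkDriver(None)  # params unused on the server, per the question

def browse(driver):
    # all headless browsing happens off the main thread
    driver.get("http://example.com")

t = threading.Thread(target=browse, args=(w,))
t.start()
choice = raw_input('main thread still owns stdin> ')
t.join()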
Here is the goal: a parser that gathers information from several domains and organizes it all in one place.
I am a newbie with Python, and chose this language for the job because of its learning curve, among other things.
As for the parsing, I am doing it with the BeautifulSoup lib and that works like a charm. The routine is triggered via crontab on CentOS 6, Python 2.7.
However, one of my parsing scripts sent me a log with a memory error, which was causing the .py file to quit without completing its job. Googling here and there, I found out that parsing very long HTML in Python could make my server run out of memory, and that it would be better to close, decompose, and even garbage-collect everything the script was no longer using.
I implemented the three things, and there were no more memory errors in the crontab task. However, every time the script runs I receive an email from crontab with the log of the parsing, which means that something went wrong there. Checking the database, all the information was recorded alright and the script completed the entire task; still, some error must have occurred, or crontab would not email me a log.
In fact, when I run the script directly in the terminal on the server, the same thing occurs: the script won't conclude, and unless I ctrl+c it, it stays frozen on the screen. Again, though, looking at the database, all the tasks were completed without an error.
I tried working only with gc, only with close(), and only with release(). Each of these three approaches still froze the screen / generated a log email (though without an error explicitly in it).
Here is a simplified version of what I am doing, for better understanding:
import gc
import urllib2
from bs4 import BeautifulSoup  # imports added for completeness; bs4 is an assumption

class GrabCategories():
    def __init__(self):
        target = 'http://provider-site.com/info.html'
        try:
            page = urllib2.urlopen(target)
            if page.getcode() == 404:
                print 'Page not found', target
                return  # __init__ cannot return a value
            soup = BeautifulSoup(page.read())
            page.close()  # not using this anymore, may I close it?
        except urllib2.URLError:
            print 'Could not open', target
            return

        content = soup.find('div', {'id': 'box-content'})
        soup.decompose()  # not using this anymore, may I decompose it?

        c = 0
        for link in content.findAll('a'):
            # define some vars
            try:
                catPage = urllib2.urlopen(link['href'])  # 'href', not 'a'
                if catPage.getcode() == 404:
                    print 'Page not found', catPage
                    return
                catSoup = BeautifulSoup(catPage.read())
                catPage.close()  # not using this anymore, may I close it?
            except urllib2.URLError:
                print 'Could not open', target
                continue
            # do some things with the page content etc.
            catSoup.decompose()  # not using this anymore, may I decompose it?
            if c % 10 == 0:  # collect every tenth page; 'if(c%10)' collected on all but every tenth
                gc.collect()
            c = c + 1
I have a script runReports.py that is executed every night. Suppose that for some reason the script takes too long to execute; I want to be able to stop it from the terminal by issuing a command like ./runReports.py stop.
I tried to implement this by having the script create a temporary file when the stop command is issued.
The script checks for existence of this file before running each report.
If the file is there the script stops executing, else it continues.
But I am not able to find a way to make the issuer of the stop command aware that the script has stopped successfully. Something along the following lines:
$ ./runReports.py stop
Stopping runReports...
runReports.py stopped successfully.
How can I achieve this?
For example, if your script runs in a loop, you can catch a signal (http://en.wikipedia.org/wiki/Unix_signal) and terminate the process:
import signal

class SimpleReport(BaseReport):
    def __init__(self):
        ...
        self.is_running = True  # must be an attribute, not a local, for the handler to see it

    def _signal_handler(self, signum, frame):
        self.is_running = False

    def run(self):
        signal.signal(signal.SIGUSR1, self._signal_handler)  # set signal handler
        ...
        while self.is_running:
            print("Preparing report")
        print("Exiting ...")
To terminate the process, just call kill -SIGUSR1 <pid>.
You want to achieve inter-process communication. You should first explore the different ways to do that: System V IPC (in memory, very versatile, possibly baffling API), sockets, including unix domain sockets (in memory, more limited, clean API), and the file system (persistent on disk, almost architecture independent), and choose yours.
As you are asking about files, there are still two ways to communicate using files: either using file content (feature-rich, harder to implement) or simply file presence. But the problem with using files is that if a program terminates because of an error, it may not be able to write its ended status to disk.
IMHO, you should clearly define your requirements before choosing file-system-based communication (testing for the end of a program is not really what it is best at), unless you also need architecture independence.
To directly answer your question: if you use file-system communication, the only reliable way to know whether a program has ended is to browse the list of currently active processes, and the simplest way is IMHO to run ps -e in a subprocess.
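A sketch of that check, assuming a POSIX ps (the script name comes from the question):

import os
import subprocess

def is_running(name="runReports.py"):
    # scan the command column of every live process for the script's name,
    # skipping this process itself (the 'stop' invocation is also runReports.py)
    out = subprocess.check_output(["ps", "-e", "-o", "pid,args"])
    me = os.getpid()
    for line in out.decode().splitlines()[1:]:
        pid, _, args = line.strip().partition(" ")
        if int(pid) != me and name in args:
            return True
    return False

The stop command could then poll is_running() and print the "stopped successfully" message once it returns False.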
Instead of having a temporary file, you could have a permanent file (config.txt) that has some tags in it, and check whether the tag reads running = True.
Achieving this is quite simple: if your code has a loop in it (I imagine it does), just make a function/method that checks this condition against the file on each pass.
def continue_running():
    with open("config.txt") as f:
        for line in f:
            tag, condition = line.split(" = ")
            if tag == "running" and condition.strip() == "True":  # strip the trailing newline
                return True
    return False
In your script you will do this:
while True:  # or your terminating condition
    if continue_running():
        pass  # your regular code goes here
    else:
        break
So all you have to do to stop the loop in the script is change 'running' to anything but "True".
I wrote a script in python that takes a few files, runs a few tests, and counts the number of total_bugs while writing new files with information for each (bugs + more).
To take a couple files from current working directory:
myscript.py -i input_name1 input_name2
When that job is done, I'd like the script to 'return total_bugs' but I'm not sure on the best way to implement this.
Currently, the script prints stuff like:
[working directory]
[files being opened]
[completed work for file a + num_of_bugs_for_a]
[completed work for file b + num_of_bugs_for_b]
...
[work complete]
Some notes/tips/code examples would be helpful here.
Btw, this needs to work on both Windows and Unix.
If you want your script to return values, just do return [1,2,3] from a function wrapping your code, but then you'd have to import your script from another script to even have any use for that information:
Return values (from a wrapping-function)
(again, this would have to be run by a separate Python script and be imported in order to even do any good):
import ...

def main():
    # calculate stuff
    return [1,2,3]
Exit codes as indicators
(This is generally just good for when you want to indicate to a governor what went wrong, or simply the number of bugs/rows counted or whatever. Normally 0 is a good exit and >= 1 is a bad exit, but you could interpret them in any way you want to get data out of them.)
import sys
# calculate and stuff
sys.exit(100)
And exit with a specific exit code depending on what you want that to tell your governor.
I used exit codes when running script by a scheduling and monitoring environment to indicate what has happened.
(os._exit(100) also works, and is a bit more forceful)
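For completeness, one way a governing script could read that exit code back (an assumed caller; myscript.py is the script from the question):

import subprocess

# note: on Unix only the low 8 bits (0-255) of an exit code survive
rc = subprocess.call(["python", "myscript.py", "-i", "input_name1", "input_name2"])
print("total_bugs reported via exit code: {0}".format(rc))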
Stdout as your relay
If not, you'd have to use stdout to communicate with the outside world (like you've described).
But that's generally a bad idea unless a parser is executing your script and can catch whatever it is you're reporting.
import sys
# calculate stuff
sys.stdout.write('Bugs: 5|Other: 10\n')
sys.stdout.flush()
sys.exit(0)
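And a sketch of the parser side mentioned above (the 'Bugs: 5|Other: 10' field format matches the snippet; everything else is an assumption):

import subprocess

out = subprocess.check_output(["python", "myscript.py", "-i", "input_name1"])
last = out.decode().strip().splitlines()[-1]  # e.g. 'Bugs: 5|Other: 10'
for field in last.split('|'):
    key, value = field.split(': ')
    print(key + ' = ' + value)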
If you are running your script in a controlled scheduling environment, then exit codes are the best way to go.
Files as conveyors
There's also the option to simply write information to a file, and store the result there.
# calculate
with open('finish.txt', 'wb') as fh:
    fh.write(str(5) + '\n')
And pick up the value/result from there. You could even do it in CSV format for others to read easily.
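The pickup side is then just a read (an assumed counterpart to the snippet above):

with open('finish.txt') as fh:
    bugs = int(fh.read().strip())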
Sockets as conveyors
If none of the above work, you can also use network sockets locally *(unix sockets is a great way on nix systems). These are a bit more intricate and deserve their own post/answer. But editing to add it here as it's a good option to communicate between processes. Especially if they should run multiple tasks and return values.
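A minimal local-socket sketch; TCP on localhost is used here so it stays portable across Windows and Unix, which is an assumption beyond the unix-socket suggestion. Run the two halves in separate processes:

import socket

# governor side: listen and wait for the report
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 50007))
srv.listen(1)
conn, _ = srv.accept()
print(conn.recv(1024).decode())  # e.g. 'Bugs: 5|Other: 10'
conn.close()
srv.close()

# script side: connect and send the result when done
s = socket.create_connection(('127.0.0.1', 50007))
s.sendall(b'Bugs: 5|Other: 10')
s.close()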