Web app becomes unresponsive after initiating Remote Desktop Connection - Python

My web app becomes unresponsive after creating a batch file and calling mstsc to execute a remote desktop connection. I would have thought that this is an independent process and does not rely in any way on my Python script.
import os

def rdp_session(server, user, temporary_pass):
    """Create a .bat file that initiates an RDP session with the given variables."""
    rdp = open("rdp_test.bat", "w")
    rdp.write("cmdkey /generic:TERMSRV/"+server+" /user:"+user+" /pass:"+temporary_pass+"\n")
    rdp.write("mstsc /v:"+server+" /admin")
    rdp.close()
    os.system("rdp_test.bat")
    #os.remove("rdp_test.bat")  # optional: delete the file with credentials after executing
I also tried using:
subprocess.call("rdp_test.bat")
subprocess.Popen(["rdp_test.bat"]) #doesnt initiate my rdp
I get the same result.
Why does this happen, and what can I do so my app stays responsive while my RDP session runs?
To add a bit of context, I have this function within a Flask app, which I use to remotely connect to different machines. While an RDP session is running, the web app does not respond to any commands, and when I terminate the RDP session, everything I clicked on suddenly executes.

In order for your app to stay responsive, you need to spawn the RDP session as a separate, independent process, rather than one that your script blocks on until it terminates.

After reading a bit about subprocesses, I found that none of these options were immediately effective, since I needed not only to run the subprocess with Popen but also to apply pathname expansion, which led me to:
import os, subprocess
subprocess.Popen([os.path.expanduser("My_File.bat")])
expanduser will expand a pathname that uses ~ to represent the current user's home directory. This works on any platform where users have a home directory, like Windows, UNIX, and Mac OS X; it has no effect on (classic) Mac OS.
Otherwise, my app would run all subsequent commands only after my RDP session was closed. This lets me run multiple subprocesses independently of my web app while it stays responsive at the same time.
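Putting it together, here is a minimal sketch of the non-blocking launch (launch_rdp is an illustrative helper name, not part of the original answer; on some Windows setups, running a .bat may additionally require shell=True or an explicit "cmd /c" prefix):
import os
import subprocess

def launch_rdp(batch_path):
    """Spawn the generated .bat without blocking the calling (Flask) thread."""
    # Popen returns as soon as the child process starts; os.system and
    # subprocess.call would block here until mstsc exits.
    subprocess.Popen([os.path.expanduser(batch_path)])

launch_rdp("rdp_test.bat")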

Related

Running processes in OS X, Find the initiator process

I'd like to create a daemon (based on a script or some lower-level language) that calculates statistics on all opened applications according to their initiating process. The problem is that the initiating process is not always equivalent to the actual parent process.
For instance, when I press a hyperlink in Microsoft Word that should open an executable file like file:///Applications/Chess.app/
In the case above, I've observed that the ppid of 'Chess' is in fact 'launchd', just the same as if I had run it from Launchpad.
Perhaps there's a mach_port (or any other) API to figure out who really initiated the application?
You can't. Mac OS X does not keep track of this information in the way you're looking for -- opening an application from another application does not establish a relationship of any sort between those applications.

How to make a Python program open itself as text in Safari?

I was trying to make a program in Python that would use os.system to open a file in Safari. In this case, I was trying to have it open a text copy of itself. The file's name is foo.py.
import os, socket
os.system("python -m SimpleHTTPServer 4000")
IP = socket.gethostbyname(socket.gethostname())
osCommand = "open -a safari http://"+IP+":4000/foo.py"
os.system(osCommand)
os.system runs a program, then waits for it to finish before returning.
So it won't get to the next line of your code until the server has finished serving. Which will never happen. (Well, you can hit ^C, and then it will stop serving—but then when you get to the next line that opens safari, it'll have no server to connect to anymore.)
This is one of the many reasons the docs for system basically tell you not to use it:
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.
For example:
import subprocess, socket
server = subprocess.Popen(['python', '-m', 'SimpleHTTPServer', '4000'])
IP = socket.gethostbyname(socket.gethostname())
safari = subprocess.Popen(['open', '-a', 'safari', 'http://'+IP+':4000/foo.py'])
server.wait()
safari.wait()
That will start both programs in the background, and then wait for both to finish, instead of starting one, waiting for it to finish, starting the other, and waiting for it to finish.
All that being said, this is kind of a silly way to do what you want. What's wrong with just opening a file URL (like 'file:///{}'.format(os.path.abspath(sys.argv[0]))) in Safari? Or in the default web browser (which would presumably be Safari for you, but would also work on other platforms, and for Mac users who use Chrome or Firefox, and so on) by using webbrowser.open on that URL?
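For instance, a minimal sketch of the webbrowser suggestion (standard library only; it opens whatever browser is the system default):
import os, sys, webbrowser

# Build a file:// URL pointing at this script's own source file...
url = 'file://' + os.path.abspath(sys.argv[0])
# ...and open it in the default browser (Safari on a stock Mac).
webbrowser.open(url)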

How to call subprocess on network machine with its environment

Say I have the following open_calc.py script on a machine that's on the network:
import subprocess
subprocess.call(['calc.exe'])
This simple script merely opens the Windows calculator and ends. Then, on a local machine, I have invoke_network_calc.py script that contains:
import imp
lecroyModule = imp.load_source('module', r'\\network_machine\c$\open_calc.py')
If I run invoke_network_calc.py from the local machine, the calculator opens on the local computer. My question is, is there a way to do this so that the calculator is opened on the network computer? Or, more generally, is there something that can be done so that, from that point on, when subprocess.call is called, it starts the process in a different environment? I know this can be done if I simply invoke the network machine's Python engine with open_calc.py, but is there a way to do this without starting a separate Python process?
EDIT: Perhaps a piece of wishful code will further specify what I'm looking for:
redirector = Redirector('network_machine')
redirector.start() # from here on, subprocesses are executed on the remote machine
lecroyModule = imp.load_source('module', r'\\network_machine\c$\open_calc.py') # calculator will be opened on remote computer
redirector.stop() # from here on, subprocesses are executed normally (on the local machine)
lecroyModule = imp.load_source('module', r'\\network_machine\c$\open_calc.py') # calculator will be opened on local computer
So basically, I'm asking if anyone knows of a way to implement the Redirector class.

Registry handles leaked?

We're running a Python script (which uses multithreading) to do some work on an Amazon EC2-based Windows Server 2008 machine. When the machine starts, I can see that it starts executing the Python script, and then I start seeing messages like the following in the event log:
Windows detected your registry file is still in use by other applications or services. The file will be unloaded now. The applications or services that hold your registry file may not function properly afterwards.
DETAIL -
19 user registry handles leaked from \Registry\User\S-1-5-21-2812493808-1934077838-3320662659-500_Classes:
Process 2872 (\Device\HarddiskVolume1\Python27\python.exe) has opened key \REGISTRY\USER\S-1-5-21-2812493808-1934077838-3320662659-500_CLASSES
Process 2844 (\Device\HarddiskVolume1\Python27\python.exe) has opened key \REGISTRY\USER\S-1-5-21-2812493808-1934077838-3320662659-500_CLASSES
Process 2408 (\Device\HarddiskVolume1\Python27\python.exe) has opened key \REGISTRY\USER\S-1-5-21-2812493808-1934077838-3320662659-500_CLASSES
What exactly does this mean, and how do I stop Windows from killing some of the threads?
When a scheduled task is configured to run as a particular user, that user's account is logged on non-interactively in order to run the task. When the task is finished, the user's registry hive is unloaded. For some reason, this is happening prematurely.
From your description, you have a single scheduled task, which launches various subprocesses. It seems likely that the parent process is exiting before the subprocesses are finished, and that this is causing the user's registry hive to be unloaded. You can verify this theory by turning on auditing for process creation and termination (in Group Policy under Advanced Audit Policy Configuration) or by using a tool such as Process Monitor (available from the MS website).
Assuming this is the cause, the fix is for the parent process to wait for the subprocesses to exit before itself exiting; alternatively, depending on your circumstances, it may be sensible for the parent task to simply never exit.
If you don't have direct control over the relationship between the parent process and the subprocesses then you'll need to create a new parent process to launch the script for you, and then either wait for all subprocesses to complete or sleep forever, as appropriate.
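For example, a minimal sketch of the "wait for the subprocesses" fix (the worker.py script name and the count are placeholders, not from the original question):
import subprocess

# Launch the worker subprocesses (placeholders for the real scripts).
workers = [subprocess.Popen(['python', 'worker.py', str(i)]) for i in range(3)]

# Keep the parent alive until every child has exited, so that the user's
# registry hive is not unloaded while workers still hold handles into it.
for proc in workers:
    proc.wait()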
It may be that some of your files are corrupted. Try the following:
Perform an SFC (System File Checker) scan and see if it helps.
Press Windows key + X.
Select Command Prompt(Admin).
Type sfc /scannow and hit enter.
Also perform a chkdsk:
Press Windows Logo + C to open the Charms bar.
Now click Settings and then More PC Settings.
Now click General and then click Restart Now under Advanced Startup.
Now Click Troubleshoot.
Now click Advanced options and select Command prompt.
Type chkdsk /r and hit enter.
Last but not least, if the above doesn't work, you can perform a startup repair:
Press Windows logo + W to open the search box.
Type Advanced Startup options, hit enter.
Then Click Restart Now under Advanced Startup.
Now Click Troubleshoot.
Then click Advanced options and then Automatic Repair.
Hope it helps.

Can a Python script know that another instance of the same script is running... and then talk to it?

I'd like to prevent multiple instances of the same long-running python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way?
Specifically, I'd like to enable the following behavior:
"foo.py" is launched from the command line, and it will stay running for a long time-- days or weeks until the machine is rebooted or the parent process kills it.
every few minutes the same script is launched again, but with different command-line parameters
when launched, the script should see if any other instances are running.
if other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit.
instance #1, if it receives command-line parameters from another script, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform.
So I'm looking for two things: how can a python program know another instance of itself is running, and then how can one python command-line program communicate with another?
Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and not any OS-specific calls. Although if I need to have a Windows codepath and an *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible.
I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work) but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible, if a persistent-file-based approach is the only way to do it, I'm open to that option.
More details: I'm trying to do this because our servers are using a monitoring tool which supports running python scripts to collect monitoring data (e.g. results of a database query or web service call) which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query). So we've chosen to keep them running in an infinite loop until the parent process kills them.
This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with 1 thread to one process with 100 threads, each executing the work that, previously, one script was doing.
But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command-line params) over to the "old" script.
BTW, this is not something I want to do on a one-script basis. Instead, I want to package this behavior into a library which many script authors can leverage-- my goal is to enable script authors to write simple, single-threaded scripts which are unaware of multi-instance issues, and to handle the multi-threading and single-instancing under the covers.
The Alex Martelli approach of setting up a communications channel is the appropriate one. I would use a multiprocessing.connection.Listener to create a listener of your choice. Documentation at:
http://docs.python.org/library/multiprocessing.html#multiprocessing-listeners-clients
Rather than using AF_INET (sockets) you may elect to use AF_UNIX for Linux and AF_PIPE for Windows. Hopefully a small "if" wouldn't hurt.
Edit: I guess an example wouldn't hurt. It is a basic one, though.
#!/usr/bin/env python
from multiprocessing.connection import Listener, Client
import socket

def myloop(address):
    try:
        # If we can bind the address, we are the first instance: serve.
        listener = Listener(*address)
        conn = listener.accept()
        serve(conn)
    except socket.error:
        # The address is taken: another instance is already serving, so
        # connect to it as a client, hand over our messages, and exit.
        conn = Client(*address)
        conn.send('this is a client')
        conn.send('close')

def serve(conn):
    while True:
        msg = conn.recv()
        if msg.upper() == 'CLOSE':
            break
        print(msg)
    conn.close()

if __name__ == '__main__':
    address = ('/tmp/testipc', 'AF_UNIX')
    myloop(address)
This works on OS X, so it needs testing with both Linux and (after substituting the right address) Windows. A lot of caveats exist from a security point of view, the main one being that conn.recv unpickles its data, so you are almost always better off with recv_bytes.
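For instance, a safer variant of the serve loop above (serve_bytes is an illustrative name; the client side would have to use send_bytes accordingly):
def serve_bytes(conn):
    # recv_bytes returns the raw payload without unpickling it, so a
    # malicious client cannot trigger code execution via the message.
    while True:
        msg = conn.recv_bytes()
        if msg.upper() == b'CLOSE':
            break
        print(msg)
    conn.close()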
The general approach is to have the script, on startup, set up a communication channel in a way that's guaranteed to be exclusive (other attempts to set up the same channel fail in a predictable way) so that further instances of the script can detect the first one's running and talk to it.
Your requirements for cross-platform functionality strongly point towards using a socket as the communication channel in question: you can designate a "well known port" that's reserved for your script, say 12345, and open a socket on that port listening to localhost only (127.0.0.1). If the attempt to open that socket fails, because the port in question is "taken", then you can connect to that port number instead, and that will let you communicate with the existing script.
If you're not familiar with socket programming, there's a good HOWTO doc here. You can also look at the relevant chapter in Python in a Nutshell (I'm biased about that one, of course;-).
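A minimal sketch of that pattern (the port number and the message handling are illustrative choices, not part of the original answer):
import socket
import sys

PORT = 12345  # "well known port" reserved for this script (arbitrary choice)

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # Binding succeeds only for the first instance.
        server.bind(('127.0.0.1', PORT))
    except socket.error:
        # The port is taken: an instance already exists, so hand over our
        # command-line parameters and exit.
        client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        client.connect(('127.0.0.1', PORT))
        client.sendall(' '.join(sys.argv[1:]).encode())
        client.close()
        return
    server.listen(5)
    while True:
        conn, _ = server.accept()
        params = conn.recv(4096).decode()
        conn.close()
        print('received work request: %s' % params)  # spawn a worker thread here

if __name__ == '__main__':
    main()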
Perhaps try using sockets for communication?
Sounds like your best bet is sticking with a pid file, but have it not only contain the process ID - have it also include the port number that the prior instance is listening on. So, when starting up, check for the pid file and, if present, see if a process with that ID is running; if so, send your data to it and quit, otherwise overwrite the pid file with the current process's info.
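A rough sketch of that startup check (the pid-file location and the "pid port" file format are assumptions for illustration):
import os

PIDFILE = '/tmp/foo.pid'  # illustrative location

def read_prior_instance():
    """Return (pid, port) from the pid file, or None if absent or stale."""
    try:
        with open(PIDFILE) as f:
            pid, port = map(int, f.read().split())
    except (IOError, ValueError):
        return None
    try:
        # POSIX-only liveness check: signal 0 tests whether the process
        # exists. On Windows this would terminate the process, so a
        # different check is needed there.
        os.kill(pid, 0)
    except OSError:
        return None  # stale file left over from a non-graceful shutdown
    return (pid, port)

if read_prior_instance() is None:
    # We are the first instance: record our pid and listening port.
    with open(PIDFILE, 'w') as f:
        f.write('%d %d' % (os.getpid(), 12345))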
