I have a Python Windows service running in the background on a shared computer. I would like to know who is bad-mannered enough to kill my processes without asking.
Is there a way to know which user kills a given service?
Started reading here, but didn't find what I was looking for:
http://timgolden.me.uk/python/win32_how_do_i/track-session-events.html
Thanks for your help.
I found an interesting article on this at the following link:
http://bugslasher.net/2011/04/17/who-the-hell-killed-my-process/
I have not tried this yet, but would be interested to hear how you get on.
UPDATE
In summary, you can download the Debugging Tools for Windows (available here).
This includes a version of GFlags (Global Flags).
GFlags can be configured to log data when processes are terminated (via the "Silent Process Exit" tab).
You should then be able to view details of killed processes (including the perpetrator) in the Event Viewer.
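If you want to script the setup instead of clicking through the GFlags GUI, something along these lines may work. The key and value names reflect my understanding of the silent-process-exit feature and are assumptions to verify against the GFlags GUI; it needs to run elevated:

    # Hedged sketch: enable "silent process exit" monitoring for python.exe
    # through the registry, as GFlags does. Key/value names and flag values
    # are assumptions -- verify them in the GFlags GUI before relying on this.
    import winreg

    EXE = "python.exe"  # the process you want to watch (assumption)
    BASE = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
    IFEO = BASE + r"\Image File Execution Options" + "\\" + EXE
    SPE = BASE + r"\SilentProcessExit" + "\\" + EXE

    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, IFEO) as key:
        # 0x200 = FLG_MONITOR_SILENT_PROCESS_EXIT (assumption, verify)
        winreg.SetValueEx(key, "GlobalFlag", 0, winreg.REG_DWORD, 0x200)

    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, SPE) as key:
        # Report the exit to the Windows event log (value is an assumption)
        winreg.SetValueEx(key, "ReportingMode", 0, winreg.REG_DWORD, 1)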
Related
I have a CLI application which is executed via Wine on Linux, as it needs some closed-source DLLs which are only available for Windows. However, I also have another tool which is much easier to compile/run on Linux. That Linux application communicates via STDIN/STDOUT.
So I want to spawn a native Linux process from Wine, pass it some data (ideally via stdin), wait for the process to complete, and read its result (ideally via stdout). This would be trivial if both processes ran in the same OS environment (pure Linux/POSIX or pure Windows) but is more complicated in my case.
I can spawn a Linux process using popen, but I can't get its stdout (I always get an empty string).
I understand that Wine itself won't/can't provide blocking process creation (probably because this creates a lot of edge cases when trying to maintain Windows semantics), as detailed in Wine bug 18335 and the Stack Overflow answer "Execute Shell Commands from Program running in WINE".
However, the Wine process is still running under Linux, so I think it should be possible to somehow tap into Linux (i.e. kernel) functionality and do a blocking read.
Does anyone have some pointers on how to launch a Linux process and get its stdout from Wine?
Any other ideas on how to do IPC without complicated server installs?
In theory I could use the file system and wait for a result file to appear, or run a TCP/HTTP server for communication. Ideally the input would be accessible only to the launched application, without opening a server port that every application on the same host can access.
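For the file-based variant, a minimal sketch of the exchange could look like this (directory and file names are placeholders I made up); a private directory keeps the data away from other users on the shared host:

    # Hedged sketch of a file-drop exchange: one side writes request.txt
    # into a directory only this user can read, the helper on the other
    # side answers with result.txt. All names are placeholders.
    import os
    import time

    EXCHANGE_DIR = os.path.expanduser("~/.myapp-exchange")
    os.makedirs(EXCHANGE_DIR, mode=0o700, exist_ok=True)  # private dir

    request = os.path.join(EXCHANGE_DIR, "request.txt")
    result = os.path.join(EXCHANGE_DIR, "result.txt")

    with open(request, "w") as f:
        f.write("input data for the Linux tool")

    while not os.path.exists(result):  # poll until the helper answers
        time.sleep(0.1)

    with open(result) as f:
        print(f.read())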
I read about "winelib" as a way to access native Unix functionality from "Windows" programs, but I'm not sure I fully grasp how to use it or whether it helps me (I can adapt the Wine program, but as I mentioned earlier I need some closed-source DLLs which I cannot modify).
Edit: I just noticed the zugbruecke library, which allows communicating with a Windows DLL from (Unix) Python (via a custom Wine + TCP connection built on Python's multiprocessing). I cannot use it as-is (my DLL library uses a lot of pointers, so I have wrapped it via pybind11) and it would mean reworking my application a bit. However, it might result in an elegant solution where the Windows bits are more isolated and I can have more Linux fun. :-)
I am writing a test application in Python, and to test a particular scenario I need to launch my Python child process under the Windows SYSTEM account.
I can do this by creating an exe from my Python script and then using that exe when creating a Windows service. But this option is not good for me, because in the future, if I change anything in my Python script, I will have to regenerate the exe every time.
If anybody has a better idea about how to do this, please let me know.
Bishnu
Create a service that runs permanently.
Arrange for the service to have an IPC communications channel.
From your desktop python code, send messages to the service down that IPC channel. These messages specify the action to be taken by the service.
The service receives the message and performs the action. That is, executes the python code that the sender requests.
This allows you to decouple the service from the python code that it executes and so allows you to avoid repeatedly re-installing a service.
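As a rough illustration, the service side could boil down to something like this (the port and the one-path-per-connection protocol are placeholders; a pywin32 service skeleton around it is assumed):

    # Hedged sketch of the service side: listen on a local socket, treat
    # each message as the path of a script to run under the service's
    # (SYSTEM) account. Port and message format are made up.
    import socket
    import subprocess

    HOST, PORT = "127.0.0.1", 5555

    def serve_forever():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind((HOST, PORT))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            script_path = conn.recv(4096).decode().strip()
            # The script runs under this process's account (SYSTEM)
            exit_code = subprocess.call(["python", script_path])
            conn.sendall(str(exit_code).encode())
            conn.close()

The desktop side then just connects to 127.0.0.1:5555 and sends the script path; the service only needs installing once.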
If you don't want to run in a service then you can use CreateProcessAsUser or similar APIs.
You could also use the Windows Task Scheduler; it can run a script under the SYSTEM account, and its interface is easy (if you do not test too often :-) ).
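For example, something along these lines should create and immediately run a one-shot task under the SYSTEM account (task name, script path and start time are placeholders; it needs an elevated prompt):

    import subprocess

    # Hedged sketch: register a one-shot task running as SYSTEM, then
    # trigger it right away. Name, path and time are placeholders.
    subprocess.call('schtasks /Create /TN "PyAsSystem" '
                    r'/TR "python C:\path\to\script.py" '
                    '/SC ONCE /ST 23:59 /RU SYSTEM', shell=True)
    subprocess.call('schtasks /Run /TN "PyAsSystem"', shell=True)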
To run a file with SYSTEM account privileges, you can use PsExec. Download it here:
Sysinternals
Then you may use:
os.system
or
subprocess.call
And execute:
PSEXEC -i -s -d CMD "path\to\yourfile"
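Put together, a hedged sketch of the subprocess.call variant (it assumes PsExec.exe is on your PATH; -accepteula suppresses the first-run license dialog, and you may need CMD /c depending on what you launch):

    import subprocess

    # Run the file in a SYSTEM-account console via PsExec, as above.
    subprocess.call(r'PSEXEC -accepteula -i -s -d CMD /c "path\to\yourfile"')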
Just came across this one - I know, a bit late, but anyway. I encountered a similar situation and solved it with NSSM (Non-Sucking Service Manager). Basically, this program enables you to run any executable as a service, which I did with my Python executable, passing the Python script I was testing as a parameter.
So I could run the service and edit the script however I wanted. I just had to restart the service when I made any changes to the script.
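For reference, the setup was essentially this (the service name and both paths are placeholders):

    import subprocess

    # Hedged sketch: install the Python interpreter as a service via
    # NSSM, with the test script as its argument, then start it.
    subprocess.call(["nssm", "install", "PyTestSvc",
                     r"C:\Python39\python.exe", r"C:\path\to\script.py"])
    subprocess.call(["nssm", "start", "PyTestSvc"])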
One point for production environments: try not to rely on third-party software like NSSM. You could also achieve this with the standard SC command (see this answer) or PowerShell (see this MS doc).
I have an interactive console application and I need to work with it using Python (send commands and receive output). The application is started by another one; I can't start it from my Python script.
Is it possible to connect to already running console application and get access to its stdin/stdout?
Ideally the solution should work on both Windows and Unix, but just a Windows version would also be helpful. Currently I am using the solution found here:
http://code.activestate.com/recipes/440554/
but it doesn't allow connecting to an existing process.
Thanks for any input.
Have you considered using sockets? They are straightforward for simple streaming and are also platform independent.
The most critical point is thread safety: having to pass I/O streams between threads/processes tends to be hectic.
If, on the other hand, you use a socket, a lot can be communicated without adding too much complexity to how the processes work (coding an error-prone RPC, for instance).
Try the socket documentation or an example to get started.
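A minimal sketch of the Python side, assuming the console application (or a thin wrapper around it) can be made to listen on a local port (host, port and the line protocol are placeholders):

    import socket

    # Hedged sketch: talk to the application over a local TCP socket
    # instead of its stdin/stdout. Port and command format are made up.
    HOST, PORT = "127.0.0.1", 9000

    sock = socket.create_connection((HOST, PORT))
    sock.sendall(b"status\n")              # send a command
    reply = sock.recv(4096).decode()       # read the response
    print(reply)
    sock.close()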
I'd like to write a python script to perform some very simple "agentless" monitoring of remote processes running on linux servers.
It would perform the following tasks, in pseudocode:
    for each remoteIPAddress in listOfIPAddresses:
        log into the server at remoteIPAddress via ssh
        execute the equivalent of a 'ps -ef' command
        grep the result to make sure a particular process (by name) is still running
One way to do this is to have python call shell scripts in a subprocess and parse their output.
That seems pretty inefficient. Is there a better way to do this via python libraries?
All I could find via research here and elsewhere was:
psutil - looks like it doesn't do remote monitoring, so I'd have to run agents on the remote machines to report stats back via RPC.
pymeter - I would have to write my own plugin for monitoring a specific remote service.
stackoverflow #4546492 - Some helpful links but the poster was looking for a different solution.
Thanks, and please go easy on me, it's my first question :-)
The Fabric library may be of interest to you.
Check out paramiko. You can use it to ssh into the server and run commands. You can then parse the results and do what you'd like with them.
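A hedged sketch of the whole loop with paramiko (host list, username and process name are placeholders; it relies on an ssh-agent or default keys in ~/.ssh):

    import paramiko

    HOSTS = ["192.0.2.10", "192.0.2.11"]   # placeholder addresses
    PROCESS_NAME = "myservice"             # placeholder process name

    for host in HOSTS:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username="monitor")
        stdin, stdout, stderr = client.exec_command("ps -ef")
        running = PROCESS_NAME in stdout.read().decode()
        print("%s: %s is %srunning"
              % (host, PROCESS_NAME, "" if running else "NOT "))
        client.close()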
Taking cues from the answers above, I investigated Fabric and found the following presentation particularly interesting/helpful. It is an overview of three libraries -- Fabric, Cuisine, and Watchdog -- for server monitoring and administration.
For posterity:
Using Fabric, Cuisine, and Watchdog for server administration in Python
It might be heavier than what you're looking for, but Zenoss supports agentless monitoring.
paramiko and Fabric, suggested in the other answers, are great options too.
Why don't you use a dedicated monitoring tool like Nagios?
Nagios has agent-based and agentless monitoring through NRPE plugins, SSH plugins, etc.
Try it out.
A few days ago I found out that my webapp, written on top of the tornadoweb framework, doesn't stop or restart via upstart. Upstart just hangs and doesn't do anything.
I investigated the issue and found that upstart receives the wrong PID, so it can only launch my webapp daemon once and can't do anything else with it.
Strace shows that my daemon makes 4 (!) clone() calls instead of 2.
A week ago everything was fine and the webapp was fully and correctly managed by upstart.
OS is Ubuntu 10.04.03 LTS (as it was weeks ago).
Do you have any ideas how to fix it?
PS: I know about the "expect fork|daemon" stanza; it changes nothing ;)
Sorry for my silence.
Investigation of the issue ended with the discovery that the uuid Python library adds two forks to my daemon. I got rid of this lib and the tornado daemon now works properly.
An alternative answer was supervisord, which can run any console tool as a daemon, even ones that can't daemonize by themselves.
There are two commonly used solutions.
The first one is to let your application honestly report its PID. If you can force your application to write its actual PID into the pidfile, then you can get its PID from there.
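For the first approach, the essential bit is to write the PID only after the final fork, e.g.:

    import os

    # Hedged sketch: write the post-daemonization PID so upstart (or
    # any supervisor) sees the process it actually needs to manage.
    # The path is a placeholder.
    def write_pidfile(path="/var/run/mywebapp.pid"):
        # Call this after all forks are done, e.g. right before
        # entering the tornado IOLoop.
        with open(path, "w") as f:
            f.write(str(os.getpid()))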
The second one is a little more complicated. You can add a specific environment variable to the script invocation. This environment variable will stay with all the forks (as long as the forks don't clear the environment), and then you can find all of your processes by parsing the /proc/*/environ files.
There might be an easier solution for finding processes by their environment, but I'm not sure.
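A hedged sketch of the /proc scan (the marker variable is something you would invent and export in the upstart job):

    import os

    MARKER = "MYAPP_INSTANCE=webapp1"  # hypothetical variable you export

    def find_pids_by_env(marker=MARKER):
        """Return PIDs of processes whose environment contains marker."""
        pids = []
        for entry in os.listdir("/proc"):
            if not entry.isdigit():
                continue
            try:
                with open("/proc/%s/environ" % entry, "rb") as f:
                    env = f.read().split(b"\0")
            except (IOError, OSError):
                continue  # process exited or we lack permission
            if marker.encode() in env:
                pids.append(int(entry))
        return pids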