Communicating between multiple Python programs running in parallel

Here's a summary of my setup:
A 3-axis CNC, controllable via a Python script running on a Raspberry Pi
A Windows PC that can connect to the Pi and run a script
The end goal is for a UI made in C# to initiate an automated test cycle for the CNC to run. In the Python program there is a Cnc object that stores the device's current position and contains methods to move it to a given place.
The problem is that if I run a new script every time I want to move the CNC, I have to re-initialize the Cnc instance and it forgets its position. So I'm wondering if I can have one master program running that contains the one and only Cnc instance; then, when the remote machine wants to tell the CNC to move, it can run a different script with args for the new position: python action.py x y z. This script could then tell the master program to call the move method with the appropriate location, without ever having to construct a new Cnc object.
Ideally the master program would then indicate when the motion is complete and send a message back to the "action" script; that script would output something to tell the remote system that the action is completed, then exit, ready to be called again with new args.
In the end the remote system is highly abstracted from any of the workings and just needs to start the master once, then run the move script with args any time it wants to perform a motion.
Note:
My other idea was to just save a text file with the current position, then always re-initialize the instance with the info in the file.
EDIT: SOLVED... sort of
handler.py
The handler continuously reads from a text file named input.txt, looking for a new integer. When one arrives, it updates a text file named output.txt to read '0', does some action with the input (e.g. moves the CNC), then writes the value '1' to output.txt.
from time import sleep

cur_pos = 0
while True:
    with open("input.txt", "r") as f:
        pos = f.readline()
    try:
        if pos == '':
            pass
        else:
            pos = int(pos)
    except ValueError:
        print(pos)
        print("exiting...")
        exit()
    if cur_pos == pos or pos == '':
        # suggestion from #Todd W to sleep before next read
        sleep(0.3)
    else:
        print("Current pos: {0:d}, New pos: {1:d}".format(cur_pos, pos))
        print("Updating...")
        with open("output.txt", "w") as f:
            f.write("0")
        # do some computation with the data
        sleep(2)
        cur_pos = pos
        print("Current pos: {0:d}".format(cur_pos))
        with open("output.txt", "w") as f:
            f.write("1")
pass_action.py
The action passer will accept a command line argument, write it to input.txt, then wait for output.txt to read '1', after which it will print done and exit.
import sys
from time import sleep

newpos = sys.argv[1]
with open("input.txt", "w") as f:
    f.write(newpos)
while True:
    sleep(0.1)
    with open("output.txt", "r") as f:
        if f.readline() == '1':
            break
sys.stdout.write("done")

One possible approach might just be to make your main Python script a web app using something like Flask or Bottle. Your app initializes the CNC, then waits for HTTP input, maybe on an endpoint like /move. Your C# app then just sends an HTTP request to that endpoint with a body like {'coordinates': [10, 15]} and your app acts on it.
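For example, a minimal sketch with Flask; the Cnc import, its move_to() method, and the exact JSON shape are assumptions standing in for your real code:

from flask import Flask, request, jsonify
from cnc_module import Cnc  # hypothetical import: wherever your Cnc class lives

app = Flask(__name__)
cnc = Cnc()  # the one and only instance, created once at startup

@app.route("/move", methods=["POST"])
def move():
    x, y, z = request.json["coordinates"]  # assumed payload: {"coordinates": [x, y, z]}
    cnc.move_to(x, y, z)  # hypothetical method; returns when the motion is done
    return jsonify({"status": "done", "position": [x, y, z]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Because the Flask process never exits, the single Cnc instance keeps its position between requests, and the C# side gets its "done" reply only after the move method returns.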

If you really want to be dead simple, have your "main" CNC script read a designated directory on the file system looking for a text file that has one or more commands. If multiple files are there, take the earliest file and execute the command(s). Then delete the file (or move it to another directory) and get the next file. If there's no file, sleep for a few seconds and check again. Repeat ad nauseam. Then your C# app just has to write a command file to the correct directory.
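A rough sketch of that loop; the directory path, the one-command-per-line "x y z" file format, and the Cnc usage are assumptions:

import os
import time
from cnc_module import Cnc  # hypothetical import: wherever your Cnc class lives

CMD_DIR = "/home/pi/commands"  # assumed drop directory
cnc = Cnc()

while True:
    files = sorted(os.listdir(CMD_DIR))  # earliest first, assuming sortable names
    if not files:
        time.sleep(3)
        continue
    path = os.path.join(CMD_DIR, files[0])
    with open(path) as f:
        for line in f:
            x, y, z = map(float, line.split())  # assumed "x y z" per line
            cnc.move_to(x, y, z)                # hypothetical method
    os.remove(path)  # or move it to a 'done' directory instead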

Your better bet is to combine gevent with gipc (https://gehrcke.de/gipc/).
This allows for asynchronous calls and communication between separate processes.
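A minimal sketch adapted from the gipc examples; the Cnc usage is an assumption:

import gevent
import gipc
from cnc_module import Cnc  # hypothetical import: wherever your Cnc class lives

def cnc_worker(reader):
    cnc = Cnc()  # the single long-lived instance lives in this process
    while True:
        x, y, z = reader.get()  # blocks cooperatively until a command arrives
        cnc.move_to(x, y, z)    # hypothetical method

def main():
    with gipc.pipe() as (reader, writer):
        proc = gipc.start_process(target=cnc_worker, args=(reader,))
        writer.put((10, 15, 0))  # send one move command
        gevent.sleep(5)          # give the worker time to act on it
        proc.terminate()

if __name__ == "__main__":
    main()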

Related

Using pwntools to interact with executable just halts on receive

I have a C executable that I want to exploit.
The output of that file looks like this:
$ ./vuln_nostack
Enter some text:
enteringTEXT
You entered: enteringTEXT
You enter some text, and the program spits it back.
I want to run this program (and later exploit it) with Python and pwntools.
So far, the functioning part of my pwntools program looks like this:
from pwn import *

pty = process.PTY  # allocate a pseudo-terminal for the target's stdio
p = process("./vuln_nostack", stdin=pty, stdout=pty)
ss = p.recv()      # read whatever the program has printed so far
p.clean()
asstring = ss.decode("utf-8")
print(asstring)
This works fine, it gets the first line and then prints it.
What I want to do now is to send a message to the program and then get the final line.
I have tried something along these lines:
p.send(b"dong")
p.clean()
print(p.recv())
I'm not sure whether or not the send actually ever sends anything, but as soon as I add the recv function, the program just hangs and never finishes.
My guess is that the input to the executable is never given properly, and therefore it's still just waiting.
How do I actually get a message delivered to the executable so that it can move on and serve me the last line?
You can also use p.sendline():
p.sendline(b"payload")
This automatically appends a newline to your bytes.
Moreover, to know whether your exploit is sending/receiving messages to/from the program, you can use debug context by adding this assignment:
context.log_level = 'debug'
The answer was a lot simpler than formerly presumed.
I just needed a newline in the send:
p.send("payload \n")
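For completeness, a minimal sketch of the whole exchange with the newline fix and the debug context applied (assuming the binary behaves exactly as shown above):

from pwn import *

context.log_level = 'debug'  # log every byte sent and received

p = process("./vuln_nostack")
print(p.recv())         # the "Enter some text:" prompt
p.sendline(b"payload")  # sendline appends the newline for you
print(p.recvall())      # the echoed line, read until the program exits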

How to make a python script stoppable from another script?

TL;DR: If you have a program that should run for an undetermined amount of time, how do you code something to stop it when the user decides it is time? (Without KeyboardInterrupt or killing the task)
--
I've recently posted this question: How to make my code stopable? (Not killing/interrupting)
The answers did address my question, but from a termination/interruption point of view, and that's not really what I wanted. (Although my question didn't make that clear.)
So, I'm rephrasing it.
I created a generic script for example purposes. So I have this class, which gathers data from a generic API and writes the data into a csv. The code is started by typing python main.py in a terminal window.
import time, csv
import GenericAPI

class GenericDataCollector:
    def __init__(self):
        self.generic_api = GenericAPI()
        self.loop_control = True

    def collect_data(self):
        while self.loop_control: # Can this var be changed from outside of the class? (Maybe one solution)
            data = self.generic_api.fetch_data() # Returns a JSON with some data
            self.write_on_csv(data)
            time.sleep(1)

    def write_on_csv(self, data):
        with open('file.csv', 'wt') as f:
            writer = csv.writer(f)
            writer.writerow(data)

def run():
    obj = GenericDataCollector()
    obj.collect_data()

if __name__ == "__main__":
    run()
The script is supposed to run forever OR until I command it to stop. I know I can just send a KeyboardInterrupt (Ctrl+C) or abruptly kill the task. That isn't what I'm looking for. I want a "soft" way to tell the script it's time to stop, not only because interruption can be unpredictable, but also because it's a harsh way to stop.
If that script was running in a Docker container (for example), you wouldn't be able to Ctrl+C unless you happened to be in a terminal/bash inside the container.
Or another situation: if that script was made for a customer, I don't think it's OK to tell the customer to just use Ctrl+C/kill the task to stop it. Definitely counterintuitive, especially if it's a non-technical person.
I'm looking for a way to code another script (assuming that's a possible solution) that would set the attribute obj.loop_control to False, finishing the loop once the current iteration is completed. Something that could be run by typing python stop_script.py in a (different) terminal.
It doesn't necessarily need to be this way. Other solutions are also acceptable, as long as they don't involve KeyboardInterrupt or killing tasks. If I could use a method inside the class, that would be great, as long as I can call it from another terminal/script.
Is there a way to do this?
If you have a program that should run for an undetermined amount of time, how do you code something to stop it when the user decides it is time?
In general, there are two main ways of doing this (as far as I can see). The first one would be to make your script check some condition that can be modified from outside (like the existence or the content of some file/socket), or, as #Green Cloak Guy stated, using pipes, which are one form of interprocess communication.
The second one would be to use the built-in mechanism for interprocess communication called signals, which exists in every OS where Python runs. When the user presses Ctrl+C the terminal sends a specific signal to the process in the foreground. But you can send the same (or another) signal programmatically (i.e. from another script).
Reading the answers to your other question I would say that what is missing to address this one is a way to send the appropriate signal to your already running process. Essentially this can be done by using the os.kill() function. Note that although the function is called 'kill' it can send any signal (not only SIGKILL).
In order for this to work you need to have the process id of the running process. A commonly used way of knowing this is making your script save its process id when it launches into a file stored in a common location. To get the current process id you can use the os.getpid() function.
So, summarizing, I'd say that the steps to achieve what you want would be:
Modify your current script to store its process id (obtainable by using os.getpid()) in a file in a common location, for example /tmp/myscript.pid. Note that if you want your script to be portable you will need to address this in a way that works on non-Unix-like OSs such as Windows.
Choose one signal (typically SIGINT or SIGTERM; note that SIGKILL and SIGSTOP cannot be caught) and modify your script to register a custom handler using signal.signal() that performs the graceful termination of your script.
Create another script (note that it could be the same script with some command line parameter) that reads the process id from the known file (i.e. /tmp/myscript.pid) and sends the chosen signal to that process using os.kill().
Note that an advantage of using signals to achieve this instead of an external way (files, pipes, etc.) is that the user can still press Ctrl+C (if you chose SIGINT) and that will produce the same behavior as the 'stop script' would.
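A minimal sketch of those three steps; the PID file path follows the example above, SIGINT is the chosen signal, and the sleep stands in for the real data-collection work:

# main.py: steps 1 and 2 (publish the PID, register a graceful handler)
import os
import signal
import time

RUNNING = True

def handle_stop(signum, frame):
    global RUNNING
    RUNNING = False  # let the loop finish its current iteration

def main():
    with open("/tmp/myscript.pid", "w") as f:
        f.write(str(os.getpid()))              # step 1
    signal.signal(signal.SIGINT, handle_stop)  # step 2
    while RUNNING:
        time.sleep(1)  # stands in for fetch_data() + write_on_csv()
    os.remove("/tmp/myscript.pid")

if __name__ == "__main__":
    main()

And the companion stop script for step 3:

# stop_script.py: read the PID file and send the chosen signal
import os
import signal

with open("/tmp/myscript.pid") as f:
    pid = int(f.read())
os.kill(pid, signal.SIGINT)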
What you're really looking for is any way to send a signal from one program to another, independent, program. One way to do this would be to use an inter-process pipe. Python has a module for this (which does, admittedly, seem to require a POSIX-compliant shell, but most major operating systems should provide that).
What you'll have to do is agree on a filepath beforehand between your running program (let's say main.py) and your stopping program (let's say stop.sh). Then you might make the main program run until someone writes something to that pipe:
import pipes
...
t = pipes.Template()
# create a pipe in the first place
t.open("/tmp/pipefile", "w")
# create a lasting pipe to read from that
pipefile = t.open("/tmp/pipefile", "r")
...
And now, inside your program, change your loop condition to "as long as there's no input from this file". Unless someone writes something to it, .read() will return an empty string:
while not pipefile.read():
    # do stuff
To stop it, you run another file or script or anything else that will write to that file. This is easiest to do with a shell script:
#!/usr/bin/env sh
echo STOP >> /tmp/pipefile
which, if you're containerizing this, you could put in /usr/bin and name it stop, give it at least 0111 permissions, and tell your user "to stop the program, just do docker exec containername stop".
(using >> instead of > is important because we just want to append to the pipe, not to overwrite it).
Proof of concept on my python console:
>>> import pipes
>>> t = pipes.Template()
>>> t.open("/tmp/file1", "w")
<_io.TextIOWrapper name='/tmp/file1' mode='w' encoding='UTF-8'>
>>> pipefile = t.open("/tmp/file1", "r")
>>> i = 0
>>> while not pipefile.read():
...     i += 1
...
At this point I go to a different terminal tab and do
$ echo "Stop" >> /tmp/file1
then I go back to my python tab, and the while loop is no longer executing, so I can check what happened to i while I was gone.
>>> print(i)
1704312

Python - Network WMI remotely run exe, and grab the text result

I have a python project called the "Remote Dongle Reader". There are about 200 machines that have a "Dongle" attached, and a corresponding .exe called "Dongle Manager". Running the Dongle Manager spits out a "Scan" .txt file with information from the dongle.
I am trying to write a script, which runs from a central location, which has administrative domain access to the entire network. It will read a list of hostnames, go through each one, and bring back all the files. Once it brings back all the files, it will compile to a csv.
I have it working on my lab/test servers, but on production systems it does not work. I am wondering if this is some sort of login issue, since people may be actively using the system. The process needs to launch silently and do everything in the background. However, since I am connecting as the administrator user, I wonder if there is a clash.
I am not sure what's going on, other than the application works up until the point I expect the file to be there. The "Dongle Manager" process starts, but it doesn't appear to be writing the scan out on any machine not logged in as administrator (the account I am running as).
Below is the snippet of the WMI section of the code. This was a very quick script, so I apologize for any non-Pythonic statements.
c = wmi.WMI(ip, user=username, password=password)
process_startup = c.Win32_ProcessStartup.new()
process_startup.ShowWindow = SW_SHOWNORMAL
cmd = r'C:\Program Files\Avid\Utilities\DongleManager\DongleManager.exe'
process_id, result = c.Win32_Process.Create(CommandLine=cmd,
                                            ProcessStartupInformation=process_startup)
if result == 0:
    print("Process started successfully: %d" % process_id)
else:
    print("Problem creating process: %d" % result)
while not os.path.exists("A:/"+scan_folder):
    time.sleep(1)
    counter += 1
    if counter > 20:
        failed.append(hostname)
        print("A:/"+scan_folder+" does not exist")
        return
time.sleep(4)
scan_list = os.listdir("A:/"+scan_folder)
scan_list.sort(key=lambda x: os.stat(os.path.join("A:/"+scan_folder, x)).st_mtime, reverse=True)
if not scan_list:  # 'scan_list is []' is always False; check emptiness instead
    failed.append(hostname)
    return
recursive_overwrite("A:/"+scan_folder+"/"+scan_list[0],
                    "C:\\AvidTemp\\Dongles\\"+hostname+".txt")
Assuming I get a connection (computer on), it usually fails at the point where it either waits for the folder to be created or expects something in the scan_folder list... either way, something is stopping the scan from being created, even though the process is starting.
Edit: I am mounting A:/ elsewhere in the code.
The problem is that you've requested to show the application window, but there is no logged-on desktop to display it. WMI examples frequently use SW_SHOWNORMAL, but that's usually the wrong choice because with WMI you are typically trying to run something in the background. In that case, SW_HIDE (or nothing) is the better choice.
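A sketch of that change applied to the question's snippet (ip, username, password, and the cmd path as defined there; SW_HIDE is the winuser.h constant 0):

import wmi

SW_HIDE = 0  # winuser.h constant: run the process with no window

c = wmi.WMI(ip, user=username, password=password)
process_startup = c.Win32_ProcessStartup.new()
process_startup.ShowWindow = SW_HIDE  # instead of SW_SHOWNORMAL
cmd = r'C:\Program Files\Avid\Utilities\DongleManager\DongleManager.exe'
process_id, result = c.Win32_Process.Create(
    CommandLine=cmd,
    ProcessStartupInformation=process_startup)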

enable a script to stop itself when a command is issued from terminal

I have a script runReports.py that is executed every night. Suppose for some reason the script takes too long to execute; I want to be able to stop it from the terminal by issuing a command like ./runReports.py stop.
I tried to implement this by having the script create a temporary file when the stop command is issued.
The script checks for existence of this file before running each report.
If the file is there the script stops executing, else it continues.
But I am not able to find a way to make the issuer of the stop command aware that the script has stopped successfully. Something along the following lines:
$ ./runReports.py stop
Stopping runReports...
runReports.py stopped successfully.
How to achieve this?
For example, if your script runs in a loop, you can catch a signal (http://en.wikipedia.org/wiki/Unix_signal) and terminate the process:
import signal

class SimpleReport(BaseReport):
    def __init__(self):
        ...
        self.is_running = True

    def _signal_handler(self, signum, frame):
        self.is_running = False  # an instance attribute, so run() sees the change

    def run(self):
        signal.signal(signal.SIGUSR1, self._signal_handler) # set signal handler
        ...
        while self.is_running:
            print("Preparing report")
        print("Exiting ...")
To terminate the process, just call kill -SIGUSR1 procId.
You want to achieve inter-process communication. You should first explore the different ways to do that: System V IPC (in memory, very versatile, possibly baffling API), sockets, including Unix domain sockets (in memory, more limited, clean API), and the file system (persistent on disk, almost architecture independent), and choose yours.
As you are asking about files, there are still two ways to communicate using files: either using file content (feature-rich, harder to implement) or simply file presence. But the problem with using files is that if a program terminates because of an error, it may not be able to write its ended status to disk.
IMHO, you should clearly define your requirements before choosing file-system-based communication (testing the end of a program is not really what it is best at), unless you also need architecture independence.
To directly answer your question: the only reliable way to know whether a program has ended, if you use file-system communication, is to browse the list of currently active processes, and the simplest way is IMHO to run ps -e in a subprocess; a sketch follows.
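A sketch of that check; `ps -e -o pid=` is the standard form that prints one PID per line with no header:

import subprocess

def is_running(pid):
    # list every process id on the system and look for ours
    out = subprocess.run(["ps", "-e", "-o", "pid="],
                         capture_output=True, text=True).stdout
    return str(pid) in (line.strip() for line in out.splitlines())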
Instead of having a temporary file, you could have a permanent file (config.txt) that has some tags in it, and check whether the tag 'running' is set to 'True'.
Achieving this is quite simple: if your code has a loop in it (I imagine it does), just make a function/method that checks a condition read from this file.
def continue_running():
    with open("config.txt") as f:
        for line in f:
            tag, condition = line.strip().split(" = ")
            if tag == "running" and condition == "True":
                return True
    return False
In your script you will do this:
while True:  # or your terminal condition
    if continue_running():
        pass  # your regular code goes here
    else:
        break
So all you have to do to stop the loop in the script is change 'running' to anything but "True".
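For example, a tiny stop script (reusing the config.txt name from above; note it overwrites the whole file, so it assumes 'running' is the only tag):

# stop_script.py: flip the tag so continue_running() returns False
with open("config.txt", "w") as f:
    f.write("running = False\n")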

How do you check when a file is done being copied in Python?

I'd like to figure out a way to alert a python script that a file is done copying. Here is the scenario:
A folder, to_print, is being watched by the script, which constantly polls it with os.listdir().
Every time os.listdir() returns a list of files in which a file exists that hasn't been seen before, the script performs some operations on that file, which include opening it and manipulating its contents.
This is fine when the file is small, and copying the file from its original source to the directory being watched takes less time than the amount of time remaining until the next poll by os.listdir(). However, if a file is polled and found, but it is still in the process of being copied, then the file contents are corrupt when the script tries to act on it.
Instead, I'd like to be able to (using os.stat or otherwise) know that a file is currently being copied, and wait for it to be done until I act on it if so.
My current idea is to use os.stat() every time I find a new file, then wait until the next poll and compare the modified/created times since the last poll; if they remain the same, the file is "stable", otherwise keep polling until it is. I'm not sure this will work, though, as I am not too familiar with how Linux/Unix updates these values.
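A sketch of that polling idea, treating a file as stable once its size and mtime stop changing between polls (the poll interval is arbitrary):

import os
import time

def wait_until_stable(path, poll=1.0):
    last = None
    while True:
        st = os.stat(path)
        current = (st.st_size, st.st_mtime)
        if current == last:
            return  # unchanged for a full poll interval: assume the copy is done
        last = current
        time.sleep(poll)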
Try inotify.
This is a Linux standard for watching files. For your use case the event IN_CLOSE_WRITE seems promising. There is a Python library for inotify. A very simple example (taken from there) follows; you'll need to modify it to catch only IN_CLOSE_WRITE events, as sketched after the example.
# Example: loops monitoring events forever.
#
import pyinotify
# Instantiate a new WatchManager (will be used to store watches).
wm = pyinotify.WatchManager()
# Associate this WatchManager with a Notifier (will be used to report and
# process events).
notifier = pyinotify.Notifier(wm)
# Add a new watch on /tmp for ALL_EVENTS.
wm.add_watch('/tmp', pyinotify.ALL_EVENTS) # <-- replace by IN_CLOSE_WRITE
# Loop forever and handle events.
notifier.loop()
Here is an extensive API documentation: http://seb-m.github.com/pyinotify/
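A sketch of that modification, reacting only when a writer closes the file (the watch path stands in for your to_print folder):

import pyinotify

class DoneCopying(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # Fired when a file opened for writing is closed: the copy is complete.
        print("Finished writing:", event.pathname)

wm = pyinotify.WatchManager()
notifier = pyinotify.Notifier(wm, DoneCopying())
wm.add_watch('/path/to/to_print', pyinotify.IN_CLOSE_WRITE)
notifier.loop()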
Since the files can be copied within one poll interval, just process the new files found by the previous poll before checking for new files. In other words, instead of this:
while True:
    newfiles = check_for_new_files()
    process(newfiles)
    time.sleep(pollinterval)
Do this:
newfiles = []
while True:
    process(newfiles)
    newfiles = check_for_new_files()
    time.sleep(pollinterval)
Or just put the wait in the middle of the loop (same effect really):
while True:
    newfiles = check_for_new_files()
    time.sleep(pollinterval)
    process(newfiles)
