I'm currently working on a desktop application that uses React for the frontend and runs in an Electron window. Electron needs to be able to communicate with my Python backend. I've found an example online that works fine for running a simple Python script from Electron and returning the result to React.
Electron code that waits for a signal from React:
ipcMain.on("REACT_TEST_PYTHON", (event, args) => {
  let pyshell = new PythonShell(
    path.join(__dirname, "../background/python/test.py"),
    {
      args: [args.test],
    }
  );
  pyshell.on("message", function (results) {
    mainWindow.webContents.send("ELECTRON_TESTED_PYTHON", { tasks: results });
  });
});
test.py that is run by Electron:
import sys

data = sys.argv[1]

def factorial(x):
    if x == 1:
        return 1
    else:
        return x * factorial(x - 1)

print(factorial(int(data)))
I completely understand how this works, but for my application the Python scripts will not be as simple. Basically, I want Electron to create a Python shell that starts a named task, and this Python shell should continue running in the background while the frontend works normally. The part I'm stuck on is figuring out how to access data from that initial Python shell from a different Python shell created by a subsequent signal in Electron. In case that doesn't make sense, this is my intended pipeline:
User clicks a React button to start "Task1".
Electron gets the signal from React and starts a Python shell to run "Task1". This Python shell runs in the background (Electron should not wait for a result before continuing to process).
Later on, the user decides to click a React button to cancel "Task1".
Electron gets the signal from React and creates a new Python shell to cancel "Task1". In order to do this, the new Python shell needs to access data from the original Python shell.
This new Python shell should also close the original Python shell so that it doesn't keep trying to run "Task1".
What would be the best way to do this?
Some thoughts I've had on how to do this:
Creating a file where the necessary data for "Task1" could be written. I think I would need the mmap module in order to use some kind of shared memory between the shells (but I could be wrong). I would also need to figure out how to close the original Python shell (see the sketch after this list).
Somehow saving a reference to the original Python shell, which could allow Electron to cancel "Task1" using the original shell. I'm not sure this is possible, though, since the original Python shell will still be running; I doubt I could access the shell while it's in the middle of processing.
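To illustrate the first idea, here is a minimal sketch (the PID file name, the signal-based shutdown, and the task loop are all assumptions, not a confirmed design):

# task1.py -- run by the first Python shell; records its PID so a later
# shell can find it, and exits cleanly when asked to stop.
import os
import signal
import time

PID_FILE = "task1.pid"  # assumed location shared by both shells

running = True

def handle_term(signum, frame):
    global running
    running = False  # lets the loop below finish cleanly

signal.signal(signal.SIGTERM, handle_term)
with open(PID_FILE, "w") as f:
    f.write(str(os.getpid()))

while running:
    time.sleep(1)  # placeholder for the real "Task1" work

os.remove(PID_FILE)

# cancel_task1.py -- run by the second Python shell; reads the PID left
# by task1.py and signals that process to stop.
import os
import signal

with open("task1.pid") as f:
    pid = int(f.read())
os.kill(pid, signal.SIGTERM)  # note: on Windows this terminates the process outright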
Thank you for any help or insight you can provide! I apologize that this may be a confusing question, please let me know if I can clear anything up.
Related
Trying to use an rpyc server via a Progress script, and have the script do different tasks based on values returned from the client.
I'm using an rpyc server to automate some tasks on demand from users, and I'm trying to implement the client in a Progress script in this way:
1. The Progress script launches.
2. The Progress script calls the rpyc client via cmd, to run a function that checks if the server is live and returns some sort of indicator of whether it is (it doesn't really matter to me what kind of indication is used; I guess different chars like 0-live, 1-not live would be preferable).
3. Based on the value returned in the last step, it either notifies the user that the server is down and quits, or proceeds to the rest of the code.
The part I'm struggling with is stage 2: how to call the client in a way that captures the value it returns, and how to actually return that value properly to the script.
I thought about using the -param command, but couldn't figure out how to use it in my scenario, where the value needs to be returned to a script that is already mid-run, rather than just calling another Progress script with the value.
The code of the client that I use for checking if the server is up is:
import rpyc

def client_check():
    c = rpyc.connect(host, 18812)  # "host" is defined elsewhere in the real script

if __name__ == "__main__":
    try:
        client_check()
    except:
        # some_method_of_transferring_the_indication #
        pass
For the Progress script, as mentioned, I haven't really managed to figure out the right way to call the client and capture a value the way I'm trying to.
I guess I could make the server create a file that would serve as an indicator of its status, and check for the file at the start of the script, but I don't know if that's the right way to do it, and I'd prefer to avoid it if possible.
I am guessing you are shelling out from the Progress script to run your rpyc script as an external process?
In that case something along these lines will read the first line of output from that rpyc script:
define variable result as character no-undo format "x(30)".
input through value( "myrpycScript arg1 arg2" ). /* this runs your rpyc script */
import unformatted result. /* this reads the result */
input close.
display result.
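On the Python side of that pipe, the client only needs to print a status token to stdout for the Progress import statement to pick up. A sketch, using the 0-live / 1-not-live convention suggested in the question (passing host as a command-line argument is an assumption):

import sys
import rpyc

def client_check(host):
    rpyc.connect(host, 18812)  # raises if the server is unreachable

if __name__ == "__main__":
    try:
        client_check(sys.argv[1])
        print("0")  # live
    except Exception:
        print("1")  # not live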
I am trying to write a script to automate my backups under Linux, and I would like to have some kind of system tray notification (KDE) that a backup is running.
After reading this other SE post and doing some research, I cannot seem to find a DBUS library for bash, so instead I am thinking of tweaking the python script from his answer and making it into an add-on for my main backup script, by having my bash backup script repeatedly call the python notification script to create, update, and remove the notification when the backup is done.
However, I'm not quite sure how to implement this on the python side, since if I were to just call python3 notify.py argument1 argument2 from bash, it would create a new instance of the python script every time.
Essentially, here's what I'm trying to do in my bash script:
#awesome backup script
./notification.py startbackup #this creates a new instance of the python script and sets up the KDE progress bar, possibly returning some kind of ID that is reused later?
#do backup things here.....
#periodically
./notification.py updateProgress 10%
./notification.py updateProgress 20%
#etc...
#finish the backup...
./notification.py endbackup #set the progressbar to complete and do cleanup
Since I haven't done anything like this before and am not sure what to search for, I am wondering how I would go about implementing something like this in the python/bash setup I have now.
I.e., if I were to make a bash variable to store an instance ID returned from the first call to the python script and pass it back on each subsequent call, how would I have to write my python script to handle this and act on the same notification created originally, rather than creating new ones?
Either keep the process running and send commands through a pipe or use a file to store the instance ID.
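For example, the file-based variant might look something like this, keeping the CLI verbs from the bash sketch above (the ID file path is an assumption, and the D-Bus calls follow the standard freedesktop Notifications interface):

#!/usr/bin/env python3
# notification.py -- reuses one notification across separate invocations
# by saving the ID returned by Notify in a temp file. Requires the
# dbus-python package.
import os
import sys
import dbus

ID_FILE = "/tmp/backup_notification.id"  # assumed location

def notify(summary, body, replaces_id=0):
    bus = dbus.SessionBus()
    proxy = bus.get_object("org.freedesktop.Notifications",
                           "/org/freedesktop/Notifications")
    iface = dbus.Interface(proxy, "org.freedesktop.Notifications")
    # Notify(app_name, replaces_id, icon, summary, body, actions, hints, timeout)
    return int(iface.Notify("backup", replaces_id, "", summary, body, [], {}, 0))

command = sys.argv[1]
if command == "startbackup":
    with open(ID_FILE, "w") as f:
        f.write(str(notify("Backup", "Starting backup...")))
elif command == "updateProgress":
    with open(ID_FILE) as f:
        nid = int(f.read())
    notify("Backup", "Progress: " + sys.argv[2], replaces_id=nid)
elif command == "endbackup":
    with open(ID_FILE) as f:
        nid = int(f.read())
    notify("Backup", "Backup complete", replaces_id=nid)
    os.remove(ID_FILE)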
Original problem
I am creating an API using express that queries a sqlite DB and outputs the result as a PDF using the html-pdf module.
The problem is that certain queries might take a long time to process, so I would like to decouple the actual query call from the node server where express is running; otherwise the API might slow down if several clients are running heavy queries.
My idea was to decouple the execution of the sqlite query by running it in a Python script instead. This script can then be called from the API, which avoids using node to query the DB.
Current problem
After quickly creating a python script that runs a sqlite query, and calling it from my API using child_process.spawn(), I found that express seems to receive an exit code as soon as the python script starts to execute the query.
To confirm this, I created a simple python script that just sleeps in between printing two messages and the problem was isolated.
To reproduce this behavior you can create a python script like this:
print("test 1")
sleep(1)
print("test 2)
Then call it from express like this:
router.get('/async', function(req, res, next) {
  // assumes child_process = require('child_process') at the top of the file;
  // "sleep_test.py" stands in for the test script shown above
  var python = child_process.spawn('python3', ['sleep_test.py']);
  var output = "";
  python.stdout.on('data', function(data){
    output += data;
    console.log(output);
  });
  python.on('close', function(code){
    if (code !== 0) {
      return res.status(200).send(code);
    }
    return res.status(200).send(output);
  });
});
If you then run the express server and do a GET /async you will get a "1" as the exit code.
However, if you comment out the sleep(1) line, the server successfully returns
test 1
test 2
as the response.
You can even trigger this using sleep(0).
I have tried flushing stdout before the sleep, piping the result instead of using .on('close'), and passing the -u option when calling python (to use unbuffered streams).
None of this has worked, so I'm guessing there's some mechanism baked into express that closes the request as soon as the spawned process sleeps OR finishes (instead of only when finishing).
I also found this answer about using child_process.fork(), but I'm not sure whether it would behave differently; and this other question is very similar to my issue but has no answer.
Main question
So my question is: why does the python script send an exit signal when doing a sleep() (or, in the case of my query script, when running cursor.execute(query))?
If my supposition is correct that express closes the request when a spawned process sleeps, is this avoidable?
One potential solution I found suggested the use of ZeroRPC, but I don't see how that would make express keep the connection open.
The only other option I can think of is using something like Kue so that my express API will only need to respond with some sort of job ID, and then Kue will actually spawn the python script and wait for its response, so that I can query the result via some other API endpoint.
Is there something I'm missing?
Edit:
AllTheTime's comment is correct regarding the sleep issue: after I added from time import sleep, it worked. However, my sqlite script is not working yet.
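For reference, with the import added the test script is simply:

from time import sleep  # this import was missing in the original script

print("test 1")
sleep(1)
print("test 2")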
As it turns out AllTheTime was indeed correct.
The problem was that my python script loads a config.json file, which loaded correctly when I ran the script from the console because the relative path resolved from the script's directory.
When calling it from node, however, the working directory was different and the relative path no longer resolved.
After fixing the path it worked exactly as expected.
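For anyone hitting the same thing, the usual fix is to resolve the config path against the script's own location instead of the caller's working directory; something along these lines:

import json
import os

# __file__ points at the script itself, so this works no matter which
# working directory node (or anything else) spawns it from.
script_dir = os.path.dirname(os.path.abspath(__file__))
with open(os.path.join(script_dir, "config.json")) as f:
    config = json.load(f)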
So first off, what I'm trying to accomplish overall is for a base machine (as in a VPS) to run automated tasks through Firefox using Python.
The goal is to have Firefox run the given tasks in the browser itself, then connect to a VPS (through the browser) using a VNC connection and issue tasks to that VPS as well (this is the part I'm having trouble with), all while requiring as little memory as possible for maximum efficiency.
To give an example, if you've used Digital Ocean, you can view your VPS's specific screen or terminal within the current browser.
To be clear, the VPS OS I'm using to run the base process is Linux, though the VPS that the program connects to (through the browser) is running Windows.
My problem is that after running through all of the scripted tasks using Selenium in Python (with Firefox), once I open up the VPS in the browser, I can't figure out how to access it properly or issue jobs to be completed.
I've thought about maybe using (x,y) coordinates for mouse clicks, though I can't say this would exactly work (I tested it with iMacros, though not yet Selenium).
So in a nutshell, I'm running base tasks in Firefox to start, and then connecting to a VPS, and finally issuing more tasks to be completed from Firefox to that VPS that's using a Windows OS environment.
Any suggestions on how to make this process simpler, more efficient, or more reliable?
There is a class in Java called Robot which can handle almost all keyboard operations.
There is a similar thing in Python: gtk.gdk.Display.
See below:
Is there a Python equivalent to Java's AWT Robot class?
Take a screenshot via a python script. [Linux]
OR
Python ctypes keybd_event simulate ctrl+alt+delete
Demo Java code:
try {
    Robot robot = new Robot();
    robot.keyPress(KeyEvent.VK_CONTROL);
    robot.keyPress(KeyEvent.VK_ALT);
    robot.keyPress(KeyEvent.VK_DELETE);
    robot.keyRelease(KeyEvent.VK_CONTROL);
    robot.keyRelease(KeyEvent.VK_ALT);
    robot.keyRelease(KeyEvent.VK_DELETE);
}
catch (Exception ex) {
    System.out.println(ex.getMessage());
}
Hope it will help you :)
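For completeness, a rough Python counterpart to the Java demo, using the third-party pyautogui package (an assumption; any similar input-automation library would do):

import pyautogui  # pip install pyautogui

# Press and release Ctrl+Alt+Delete, mirroring the Robot demo above.
# Note: on Windows the real Ctrl+Alt+Del is a secure sequence that
# injected input cannot trigger.
pyautogui.keyDown("ctrl")
pyautogui.keyDown("alt")
pyautogui.keyDown("delete")
pyautogui.keyUp("delete")
pyautogui.keyUp("alt")
pyautogui.keyUp("ctrl")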
Let's say I have a view page(request) which loads page.html.
Now, after page.html has successfully loaded, I want to automatically run a Python script behind the scenes 10-15 seconds later. How is this possible?
Also, is it possible to show the status of the script dynamically (running / stopped / syntax error, etc.)?
Running a script from JavaScript is not a clean way to do it, because the user can close the browser, disable JS, etc. Instead you can use django-celery; it lets you run background scripts, and you can check the status of the script dynamically from a middleware. Good luck.
You could add a client-side timeout that AJAXes back to the server 10-15 seconds later. Point it at a different view and execute your script within that view. For example:
function runServerScript() {
    $.get("/yourviewurlhere", function(data) {
        // Do something with the return data
    });
}
setTimeout(runServerScript, 10000);
If you want status to be displayed, the client would have to make multiple requests back to the server.
Celery might come in handy for such use cases. You can start a task (or script, as you call them) from a view, even with a delay, as you want. Sending status reports back to the browser will be harder unless you opt for something like WebSockets, but that's highly experimental right now.
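To make the Celery suggestion concrete, a minimal sketch (the broker URL and task body are assumptions):

# tasks.py
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # assumed broker

@app.task
def run_script():
    # ... the actual work of the script goes here ...
    return "done"

# In the Django view, after page.html has been served:
#   run_script.apply_async(countdown=10)  # fires roughly 10 seconds later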