pynetdicom not working correctly with Windows Task Scheduler - python

I am using a modified version of this pynetdicom script (the second example on this page) to download DICOM images to an office computer. Here is what the script does:
Opens a connection with PACS
Searches for DICOM images that match the current date for a given patient's medical record number and accession number.
If DICOM images are found that match the given criteria, then an SCP server connection is started to initiate the downloading of images to a folder on the local computer.
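A trimmed-down sketch of roughly this Move SCU + Storage SCP flow (not the exact script; the AE titles, PACS address, port, and identifiers below are placeholders):

from pydicom.dataset import Dataset
from pynetdicom import AE, evt, StoragePresentationContexts
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelMove

def handle_store(event):
    """Save each received DICOM instance to the local folder."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(ds.SOPInstanceUID + ".dcm")
    return 0x0000  # Success

handlers = [(evt.EVT_C_STORE, handle_store)]

ae = AE(ae_title="MY_SCP")  # placeholder AE title
ae.add_requested_context(PatientRootQueryRetrieveInformationModelMove)
ae.supported_contexts = StoragePresentationContexts

# Storage SCP that receives the images PACS sends back
scp = ae.start_server(("", 11112), block=False, evt_handlers=handlers)

# C-MOVE query: study matching a given MRN, accession number, and date
ds = Dataset()
ds.QueryRetrieveLevel = "STUDY"
ds.PatientID = "12345678"        # placeholder MRN
ds.AccessionNumber = "A0001"     # placeholder accession number
ds.StudyDate = "20240101"        # current date in practice

assoc = ae.associate("pacs.example.com", 104)  # placeholder PACS host/port
if assoc.is_established:
    responses = assoc.send_c_move(ds, "MY_SCP", PatientRootQueryRetrieveInformationModelMove)
    for status, identifier in responses:
        pass  # inspect status.Status here if needed
    assoc.release()

scp.shutdown()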
The script works when it is run from the Spyder IDE. I have created a scheduled task in Windows Task Scheduler, and it works correctly only if the script has first been run in Spyder, Spyder remains open, and its variables have not been cleared.
However, if Spyder is closed or the Spyder kernel is restarted, the script run through Task Scheduler works correctly until it reaches the point where the SCP server should call the handle_store function that downloads images from PACS. handle_store is never called, and the connection eventually times out.
I thought the solution would be to change the default working directory in Task Scheduler, but that did not work. Any ideas what is going on and how to fix it?

Okay, I did some more digging and found the source of the problem. In order for images to download to my computer, Python needs to be allowed through our corporate firewall. I had already allowed pythonw.exe through the firewall, but not python.exe. Once both executables were allowed through the firewall, the script ran as expected when started with Windows Task Scheduler.

Related

Scheduled Python script in task scheduler not working

I have a Python script that I am trying to schedule in Task Scheduler on my VM, but it doesn't seem to run; it returns (0x2) as the last run result. I am able to run the script manually and it works. I even created a batch file to execute the script, which works when run by hand, but scheduling that in Task Scheduler gave the same error. My only guess is that it's not working because the script uses the Google Sheets API and reads the credentials from a JSON file in the project folder, but I'm still unsure why it wouldn't run when scheduled. If you have any ideas I would greatly appreciate it. In Task Scheduler, I am using the path Z:\Python\PythonGSAPI\executePy.bat to execute the batch file. The content of the batch file is
@echo off
"C:\Python27\python.exe" "Z:\Python\PythonGSAPI\TF_Invoice.py"
pause
This occurs due to the PATH environment variable. For example, if you use Anaconda Python, you need to choose the option to add it to PATH during installation, or configure the PATH afterwards.
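Another thing worth checking for a 0x2 (file not found) result: relative paths in the script are resolved against Task Scheduler's working directory, not the project folder. A minimal sketch (the credentials.json filename is an assumption) that makes the credentials path independent of the working directory:

import os

# Resolve files relative to the script itself, not the scheduler's working directory
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
CREDENTIALS_PATH = os.path.join(SCRIPT_DIR, "credentials.json")  # assumed filename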

How to access terminal Python process running on server from other script

Consider this situation:
I have an Ubuntu server with Python, TensorFlow and other libraries installed.
My code is a Python script that loads several models: some are pretrained .bin vectors, some are files from server folders, etc.
When I run the script in a terminal, it launches an interactive session where I input some text and the script replies (like a chatbot). To produce an answer it calls my AI models (TensorFlow, Keras).
Question: how do I access this running session from another Python script? I want to use it like a function: send text and receive the answer back.
And of course I need to keep this terminal session running in the background for a long time.
I have read this and similar answers, but I'm not sure it is the right solution (it seems incomplete):
In Linux, how to prevent a background process from being stopped after closing SSH client
What I am asking about is commonly done with a REST server that exposes an API, which is then called from external code. But that approach is not working for me: TensorFlow throws errors when run via Flask (I was not able to fix them).
If you want your script to stay up after closing the SSH session, add & disown at the end of your execution command and it will run in the background.
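For the "use it like a function" part of the question, one sketch (not a drop-in solution; the port and line-based protocol are arbitrary choices) is to wrap the loaded models in a small socket server that other scripts can talk to, instead of an interactive terminal session:

import socketserver

def generate_answer(text):
    # Placeholder for the real call into the TensorFlow/Keras models,
    # which would be loaded once at startup
    return "echo: " + text

class ChatHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one line of text, reply with one line of text
        question = self.rfile.readline().decode("utf-8").strip()
        answer = generate_answer(question)
        self.wfile.write((answer + "\n").encode("utf-8"))

if __name__ == "__main__":
    with socketserver.TCPServer(("localhost", 9999), ChatHandler) as server:
        server.serve_forever()

Started with nohup or & disown as above, it keeps running after the SSH session closes, and a client only needs a few lines with the socket module to send a line of text and read the reply.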

Run Spyder/Python on remote server

So there are variants of this question - but none quite hit the nail on the head.
I want to run Spyder and do interactive analysis on a server. I have two servers; neither has Spyder. They both have Python (Linux servers), but I don't have sudo rights to install the packages I need.
In short, the use case is: open Spyder on my local machine, do something (need help here) to use the server's computation power, and then return results to my local machine.
Update:
I have updated Python with my packages on one server. Now I need to figure out the kernel name and how to link it to Spyder.
Leaving previous version of question up, as that is still useful.
The Docker route is a little intimidating, as is paramiko. What are my options?
(Spyder maintainer here) What you need to do is create a Spyder kernel on your remote server and connect to it through SSH. That's the only facility we provide to do what you want.
You can find the precise instructions to do that in our docs.
I did a long search for something like this in my past job, when we wanted to quickly iterate on code which had to run across many workers in a cluster. All the commercial and open source task-queue projects that I found were based on running fixed code with arbitrary inputs, rather than running arbitrary code.
I'd also be interested to see if there's something out there that I missed. But in my case, I ended up building my own solution (unfortunately not open source).
My solution was:
1) I made a Redis queue where each task consisted of a zip file with a bash setup script (for pip installs, etc), a "payload" Python script to run, and a pickle file with input data.
2) The "payload" Python script would read in the pickle file or other files contained in the zip file. It would output a file named output.zip.
3) The task worker was a Python script (running on the remote machine, listening to the Redis queue) that would unzip the file, run the bash setup script, then run the Python script. When the script exited, the worker would upload output.zip.
There were various optimizations, like the worker not running the same bash setup script twice in a row (it remembered the SHA1 hash of the most recent setup script). A rough sketch of the worker loop is shown below. So, anyway, in the worst case you could do that. It was a week or two of work to set up.
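A simplified sketch of such a worker loop (assuming the queue item is a path to the task zip and the archive contains setup.sh and payload.py; error handling and the upload step are omitted):

import hashlib
import subprocess
import zipfile

import redis  # third-party "redis" package

r = redis.Redis()
last_setup_hash = None

while True:
    # Block until a task appears; here the queue item is a path to the task zip
    _, task_zip = r.blpop("tasks")
    workdir = "task_workdir"
    with zipfile.ZipFile(task_zip.decode(), "r") as zf:
        zf.extractall(workdir)

    # Skip the bash setup script if it is identical to the previous task's
    with open(f"{workdir}/setup.sh", "rb") as f:
        setup_hash = hashlib.sha1(f.read()).hexdigest()
    if setup_hash != last_setup_hash:
        subprocess.run(["bash", "setup.sh"], cwd=workdir, check=True)
        last_setup_hash = setup_hash

    # Run the payload script; it reads its pickle input and writes output.zip
    subprocess.run(["python", "payload.py"], cwd=workdir, check=True)
    # ...upload output.zip back to the requester here...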
Edit:
A second (much more manual) option, if you just need to run on one remote machine, is to use sshfs to mount the remote filesystem locally, so you can quickly edit the files in Spyder. Then keep an ssh window open to the remote machine, and run Python from the command line to test-run the scripts on that machine. (That's my standard setup for developing Raspberry Pi programs.)

Is it Possible to Run a Python Code Forever?

I have coded a Python script for Twitter automation using Tweepy. When I run it on my own Linux machine as python file.py, it runs successfully and keeps on running, because I have specified repeated tasks inside the script and I don't want to stop it. But as it is on my local machine, the script might get stopped when my internet connection is off or at night, so I can't keep the script running all day on my PC.
So is there any way, website, or method where I could deploy my script and have it execute forever? I have heard of cron jobs in cPanel, which can help with repeated tasks, but in my case I want to keep my script running on the machine until I close it myself.
Are there any such solutions? Most of the Twitter bots I see run forever, meaning their script is being executed somewhere 24x7. This is what I want to know: how is that possible?
As mentioned by Jon and Vincent, it's better to run the code from a cloud service. But either way, I think what you're looking for is how to keep the code running even after you close the terminal. This is what worked for me:
nohup python code.py &
You can add a systemd .service file (a minimal example is sketched after this list), which can have the added benefits of:
logging (compressed logs at a central place, or over network to a log server)
disallowing access to /tmp and /home-directories
restarting the service if it fails
starting the service at boot
setting capabilities (ref setcap/getcap), disallowing file access if the process only needs network access, for instance
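A minimal unit file sketch along those lines (service name, paths, and user are placeholders), saved as /etc/systemd/system/twitterbot.service and enabled with systemctl enable --now twitterbot:

[Unit]
Description=Twitter automation bot
Wants=network-online.target
After=network-online.target

[Service]
# Placeholder interpreter path, script path, and user
ExecStart=/usr/bin/python3 /home/bot/file.py
WorkingDirectory=/home/bot
User=bot
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target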

How can I monitor a Python script and restart it in the event of a crash? (Windows)

I have a simple Python script to send data from a Windows 7 box to a remote computer via SFTP. The script is set to send a single file every 5 minutes. This all works fine, but I'm worried about the off chance that the process stops or fails and the customer doesn't notice that the data files have stopped coming in. I've found several ways to monitor Python processes in a Ubuntu/Unix environment, but nothing for Windows.
If there are no other mitigating factors in your design or requirements, my suggestion would be to simplify the script so that it doesn't do the polling; it should simply send the file when invoked, and Windows Task Scheduler should invoke it on whatever schedule you need. By relying on a core Windows service, you can factor that complexity out of your script.
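A minimal single-shot sketch of that idea, assuming paramiko is used for the SFTP transfer (host, credentials, and paths are placeholders):

import paramiko

# Send one file and exit; Task Scheduler handles the every-5-minutes part
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    sftp.put(r"C:\data\latest.csv", "/remote/incoming/latest.csv")
finally:
    sftp.close()
    transport.close()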
You can check out ReStartMe; the following link shows how you can use it:
http://www.howtogeek.com/130665/quickly-and-automatically-restart-a-windows-program-when-it-crashes/
