I'm running a Python script using the Task Scheduler. The script pulls data from a database with pandas and stores it in csv.gz files.
It runs without errors, but the amount of data differs between the run Task Scheduler triggers automatically and a manual run started with the Task Scheduler Run button. The manual run retrieves all the data, whereas the automated run retrieves less.
Both runs execute the same script; I've checked multiple times but can't identify the issue.
PS:
Using Windows Server 2008 and pymssql to connect to the database.
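To narrow this down, I'm planning to add logging of the execution context and row counts so I can compare the automatic and manual runs directly. A rough diagnostic sketch, assuming the same pandas/pymssql setup; the server name, query, and file paths below are placeholders, not my real ones:

```python
# Diagnostic sketch, not my actual script: log who runs the task, where it runs,
# and how many rows come back, so the automatic and manual runs can be compared.
# Server, database, query, and paths are placeholders.
import getpass
import logging
import os

import pandas as pd
import pymssql

logging.basicConfig(filename=r"C:\logs\export_debug.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")
logging.info("user=%s cwd=%s", getpass.getuser(), os.getcwd())

conn = pymssql.connect(server="DBSERVER", database="MyDb")   # placeholder connection
df = pd.read_sql("SELECT * FROM dbo.MyTable", conn)          # placeholder query
logging.info("rows fetched: %d", len(df))

df.to_csv(r"C:\exports\data.csv.gz", index=False, compression="gzip")
logging.info("rows written: %d", len(df))
```

If the automated run logs a different user, working directory, or row count than the manual run, that would point to the execution environment (the account the task runs under, drive mappings, or a time-dependent query) rather than the script itself.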
Newbie with Azure Batch here.
I have created a pipeline in ADF that runs a Python script to call an API and do some data manipulation. I call this script via a "Custom" activity through Azure Batch.
I tested this locally, and the Python script itself runs and finishes perfectly. Through Azure I can see that the script has run, based on the CPU usage in the database. However, the script never finishes, so the pipeline never moves on to the next step.
The script code:
The script makes a cURL POST request to an interface that runs some SQL in the background. This should take roughly 15 minutes, and it finishes correctly when run locally.
When I look at my pool in Azure Batch, I can see one node in a running state at 100%.
After 12 hours, I'm getting this message from ADF:
How can I troubleshoot what is going on? Why is my script not finishing?
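One thing I'm considering is whether the POST request can hang indefinitely if the interface stalls, since on the Batch node there is nobody watching it. A sketch of how I could add an explicit timeout, assuming the request is made with the requests library (it may be a curl subprocess in the real script); the URL, payload, and timeout value are placeholders:

```python
# Sketch only: give the long-running POST an explicit timeout so a hang becomes
# a visible failure instead of blocking forever. URL, payload, and the timeout
# value are assumptions, not the original script.
import sys

import requests

try:
    resp = requests.post(
        "https://example.com/run-sql",     # placeholder endpoint
        json={"job": "nightly-load"},      # placeholder payload
        timeout=30 * 60,                   # fail if the server stops responding for 30 minutes
    )
    resp.raise_for_status()
    print("Finished with status", resp.status_code)
except requests.RequestException as exc:
    print("Request failed: {}".format(exc), file=sys.stderr)
    sys.exit(1)                            # non-zero exit so Batch/ADF marks the activity as failed
```

Exiting with a non-zero code should at least surface the failure in ADF instead of letting the activity sit for 12 hours.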
I want to keep my Python scripts running forever on Windows Server 2012.
I tried using the Windows Task Scheduler, but it keeps creating a new instance of the script on every trigger, which fills up my memory. Currently I run my scripts through a command prompt, keep them minimized, and never log out of the server.
Any help is really appreciated.
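If it helps, this is the kind of single-instance guard I've been considering, so Task Scheduler could retrigger the script without stacking copies; the port number is an arbitrary choice:

```python
# Sketch of a single-instance guard: any extra launch exits immediately because
# the localhost port is already held by the first copy.
import socket
import sys
import time

def acquire_single_instance_lock(port=54321):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("127.0.0.1", port))   # fails if another copy already holds the port
    except socket.error:
        sys.exit(0)                      # another instance is running; leave quietly
    return sock                          # keep the socket alive so the lock stays held

def main_loop():
    while True:
        # ... the real work goes here ...
        time.sleep(60)

if __name__ == "__main__":
    _lock = acquire_single_instance_lock()
    main_loop()
```

With that in place, the task could be scheduled to trigger every few minutes: if the script is already running the new launch exits at once, and if it has died the next trigger restarts it.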
You could use sc create (https://learn.microsoft.com/en-US/windows-server/administration/windows-commands/sc-create) to create a service, then use Scheduled Tasks to control it.
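Since the script itself is Python, another option (not what the link above covers, just a common alternative) is to register the script as a Windows service with pywin32; the usual skeleton looks roughly like this, with the service name and work loop as placeholders:

```python
# Rough pywin32 service skeleton (pip install pywin32). Names and the work loop
# are placeholders.
import win32event
import win32service
import win32serviceutil


class MyScriptService(win32serviceutil.ServiceFramework):
    _svc_name_ = "MyScriptService"            # placeholder service name
    _svc_display_name_ = "My Python Script"   # placeholder display name

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.stop_event)

    def SvcDoRun(self):
        # Run until the service manager signals a stop.
        while win32event.WaitForSingleObject(self.stop_event, 60000) != win32event.WAIT_OBJECT_0:
            pass  # ... the real work goes here ...


if __name__ == "__main__":
    win32serviceutil.HandleCommandLine(MyScriptService)
```

Installing with `python myservice.py install` and starting with `python myservice.py start` then lets the Services console (or a scheduled task calling `net start`/`net stop`) control it.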
I have a project in which one of the tests consists of running a process indefinitely in order to collect data on the program execution.
It's a Python script that runs locally on a Linux machine, but I'd like other people on my team to have access to it as well, because there are specific moments when the process needs to be restarted.
Is there a way to set up a workflow on this machine that, when dispatched, stops and restarts the process?
You can execute commands on your Linux host via GH Actions and SSH. Take a look at this action.
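One way to wire that up is to keep a small restart helper on the Linux box and have a workflow_dispatch-triggered job SSH in and run it. A sketch of such a helper; the pid-file path and command line are assumptions, not anything from your setup:

```python
# restart_collector.py - hypothetical helper a GitHub Actions job could run over SSH.
# The pid-file path and command line are placeholders.
import os
import signal
import subprocess
import time

PID_FILE = "/tmp/collector.pid"                     # assumed location
COMMAND = ["python3", "/opt/collector/collect.py"]  # assumed command

def stop_existing():
    if not os.path.exists(PID_FILE):
        return
    with open(PID_FILE) as fh:
        pid = int(fh.read().strip())
    try:
        os.kill(pid, signal.SIGTERM)                # ask the old process to exit
        time.sleep(5)
    except OSError:
        pass                                        # process was already gone
    os.remove(PID_FILE)

def start_new():
    # Note: launch under nohup/systemd-run or a process manager so the new
    # process survives the SSH session ending.
    proc = subprocess.Popen(COMMAND)
    with open(PID_FILE, "w") as fh:
        fh.write(str(proc.pid))

if __name__ == "__main__":
    stop_existing()
    start_new()
```

The workflow itself then only needs a workflow_dispatch trigger and an SSH step that runs the helper, so anyone on the team can restart the process from the Actions tab.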
So I am trying to set up a continuous integration environment using Jenkins.
One of the build steps requires a series of mouse actions/movements to accomplish a task in Excel. I have already written a Python script using the ctypes library to do this.
The script works perfectly fine if I run it, either through Jenkins or on the server itself, while I am actively logged in to the server over a Remote Desktop connection. But as soon as I minimize or close the connection and then run the script from Jenkins, the mouse events never seem to get executed. Is there something I can add to the script to make this work? Thanks for any help you can provide.
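In case it matters, the calls the script makes are roughly like the sketch below (coordinates are placeholders), and I've added a return-value check to see whether the calls fail silently when the session is disconnected, since from what I've read these APIs need an interactive desktop:

```python
# Rough shape of the mouse automation, plus a check so a silent failure shows up
# in the Jenkins console log. Coordinates are placeholders.
import ctypes

user32 = ctypes.WinDLL("user32", use_last_error=True)
MOUSEEVENTF_LEFTDOWN = 0x0002
MOUSEEVENTF_LEFTUP = 0x0004

if not user32.SetCursorPos(200, 300):                  # returns 0 when it cannot move the cursor
    raise ctypes.WinError(ctypes.get_last_error())

user32.mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)   # press the left button ...
user32.mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)     # ... and release it
print("mouse events sent")
```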
I have a Windows scheduled task that does some database updates and runs every few hours.
I also have a Python script that updates data in the same database and is called by users.
I want to prevent data conflicts between the scheduled task and the Python script. When a user calls the Python script, how can I check whether the scheduled task is running, so that the script delays its database update until the task has finished? Or would it be easier to check whether the database is currently being updated (and how would I do that from the Python script)?
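The rough approach I had in mind for the first option is to poll the task's status with schtasks before the script touches the database; a sketch, with the task name as a placeholder:

```python
# Sketch: wait until the Windows scheduled task is no longer "Running" before
# the script performs its own database update. The task name is a placeholder.
import subprocess
import time

TASK_NAME = r"\MyDatabaseUpdateTask"   # placeholder task name

def task_is_running(task_name):
    output = subprocess.check_output(
        ["schtasks", "/Query", "/TN", task_name, "/FO", "LIST", "/V"])
    for line in output.splitlines():
        if line.strip().startswith(b"Status:"):
            return b"Running" in line
    return False

def wait_for_task(task_name, poll_seconds=30):
    while task_is_running(task_name):
        time.sleep(poll_seconds)

if __name__ == "__main__":
    wait_for_task(TASK_NAME)
    # ... safer to run the database update here ...
```

I realise this only checks the scheduled-task side; if a database-level lock (for example SQL Server's sp_getapplock) would be more robust, I'm open to that too.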
I am running Python 2.7 64-bit, Windows Server 2012 64-bit, and SQL Server 2012 64-bit.
Thanks in advance.