I have a script, written in Python, that runs on an Ubuntu server. Today I mistakenly closed the PuTTY window after checking that the script was running correctly.
There is some useful information that was printed while the script was running, and I would like to recover it.
Is there a directory, like /var/log/syslog for system logs, for Python logs?
This script takes 24 hours to run on a very costly AWS EC2 instance, so running it again is not an option.
Yes, I should have printed the useful information to a log file myself from the Python script, but no, I did not do that.
Unless the script has an internal logging mechanism (e.g. the logging module, as mentioned in the comments), the output will have been written to /dev/stdout or /dev/stderr. In that case, if you did not persist those streams to a file (for example with tee), your output is lost.
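For future runs, a minimal way to keep a persistent copy of the output is to duplicate both streams to a file; the script and file names below are just placeholders:

# duplicate stdout and stderr to run.log while still seeing them live
python my_script.py 2>&1 | tee run.log

# alternatively, capture the whole session with the script utility
script -c "python my_script.py" run.log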
I try to run all the rules with the following commands:
touch scripts/*.py
snakemake --cores <YOUR NUMBER>
The problem is that my local internet connection is unstable. I can submit the command through PuTTY to the Linux computation platform, but output keeps being sent back to the PuTTY interface, so when my local connection drops, the running code is interrupted as well.
Is there any way to let the code just run on the Linux backend, with the output written to a log file at the end?
This could be a very basic question.
This is a common problem (not just for snakemake), and there are several options, at least the following:
use a program that persists across multiple connections: popular options are screen and tmux. The workflow looks like this: log on to the server, launch screen or tmux, launch the code you would like to run inside that session, and log off; the next time you log in to the server, you can reconnect to the previous session and see the computations that were done in the meantime. I recommend tmux; see this tmux tutorial.
use nohup: this launches the computation in the background, and it will continue running on the server if you disconnect:
nohup snakemake --cores <YOUR NUMBER>
Note that with this option, if you want to see the progress of the computation, you will need to watch the appropriate .log file inside the .snakemake folder. A rough sketch of both options follows.
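Roughly, the two workflows look like this (the session and log file names are just examples):

# option 1: run inside a tmux session you can detach from and re-attach to
tmux new -s snakemake_run
snakemake --cores <YOUR NUMBER>
# detach with Ctrl-b d, log off, and later re-attach with:
tmux attach -t snakemake_run

# option 2: nohup with explicit redirection, so the output survives in run.log
nohup snakemake --cores <YOUR NUMBER> > run.log 2>&1 &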
So there are variants of this question - but none quite hit the nail on the head.
I want to run Spyder and do interactive analysis on a server. I have two servers; neither has Spyder. They both have Python (Linux servers), but I don't have sudo rights to install the packages I need.
In short, the use case is: open Spyder on the local machine, do something (I need help here) to use the server's computation power, and then return the results to the local machine.
Update:
I have updated Python with my packages on one server. Now I need to figure out the kernel name and link it to Spyder.
Leaving the previous version of the question up, as that is still useful.
The Docker process is a little intimidating, as is paramiko. What are my options?
(Spyder maintainer here) What you need to do is create a Spyder kernel on your remote server and connect to it through SSH. That's the only facility we provide to do what you want.
You can find the precise instructions to do that in our docs.
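In rough outline, the flow looks like the sketch below; check the linked docs for the exact steps, since details can change between versions, and user@server and the kernel file name here are placeholders:

# on the remote server: make sure spyder-kernels is available for your Python, then start a kernel
pip install --user spyder-kernels
python -m spyder_kernels.console
# the kernel prints the name of its connection file, something like kernel-NNNN.json

# on the local machine: copy that connection file over (remote path is a placeholder)
scp user@server:/path/to/kernel-NNNN.json .
# then in Spyder, use the console option "Connect to an existing kernel",
# select the copied file, and fill in the SSH host and credentials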
I did a long search for something like this in my past job, when we wanted to quickly iterate on code which had to run across many workers in a cluster. All the commercial and open source task-queue projects that I found were based on running fixed code with arbitrary inputs, rather than running arbitrary code.
I'd also be interested to see if there's something out there that I missed. But in my case, I ended up building my own solution (unfortunately not open source).
My solution was:
1) I made a Redis queue where each task consisted of a zip file with a bash setup script (for pip installs, etc), a "payload" Python script to run, and a pickle file with input data.
2) The "payload" Python script would read in the pickle file or other files contained in the zip file. It would output a file named output.zip.
3) The task worker was a Python script (running on the remote machine, listening to the Redis queue) that would unzip the file, run the bash setup script, then run the Python script. When the script exited, the worker would upload output.zip.
There were various optimizations, like the worker wouldn't run the same bash setup script twice in a row (it remembered the SHA1 hash of the most recent setup script). So, anyway, in the worst case you could do that. It was a week or two of work to set up. A rough sketch of that kind of worker loop is below.
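The original worker was a Python script, but the loop can be sketched in shell too. This is only an illustration, assuming each queue entry is the path to a task zip on a shared filesystem, pushed onto a Redis list called "tasks"; the queue name, file names, and results directory are all hypothetical:

#!/usr/bin/env bash
while true; do
    # block until a task arrives; --raw prints the list name and then the value
    task_zip=$(redis-cli --raw BLPOP tasks 0 | tail -n 1)
    workdir=$(mktemp -d)
    unzip -q "$task_zip" -d "$workdir"
    # run the bash setup script, then the payload Python script
    (cd "$workdir" && bash setup.sh && python payload.py)
    # collect the results (destination is a placeholder)
    cp "$workdir/output.zip" /srv/results/"$(basename "$task_zip" .zip)"-output.zip
done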
Edit:
A second (much more manual) option, if you just need to run on one remote machine, is to use sshfs to mount the remote filesystem locally, so you can quickly edit the files in Spyder. Then keep an ssh window open to the remote machine, and run Python from the command line to test-run the scripts on that machine. (That's my standard setup for developing Raspberry Pi programs.)
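As a rough sketch of that setup (host, user, and paths are placeholders):

# mount the remote project locally so Spyder can edit the files in place
mkdir -p ~/remote-project
sshfs user@server:/home/user/project ~/remote-project

# in a second terminal, keep an SSH session open and test-run on the remote machine
ssh user@server
python /home/user/project/script.py

# when done, unmount (on Linux)
fusermount -u ~/remote-project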
I'm having some problems making a Python file run every time the AWS server boots.
I am trying to run a Python file that starts a web server on an Amazon Web Services EC2 server.
But I am limited in editing the systemd folder and other folders such as init.d.
Is there anything wrong?
Sorry, I don't really understand EC2's OS; it seems a lot of methods don't work on it.
What I usually do via ssh to start my server is:
python hello.py
Can anyone tell me how to run this file automatically every time system reboots?
It depends on your Linux OS, but you are on the right track (init.d). That is exactly where you'd want to run arbitrary shell scripts on startup.
Here is a great HOWTO and explanation:
https://www.tldp.org/HOWTO/HighQuality-Apps-HOWTO/boot.html
and another Stack Overflow post specific to running a Python script:
Run Python script at startup in Ubuntu
If you want to share your Linux OS, I can be more specific.
EDIT: This may help; it looks like they have some sort of launch wizard:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
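For example, a user-data shell script along these lines would start the server when the instance launches; the directory and log path below are placeholders. Note that by default a user-data shell script runs as root and only on the instance's first boot, so for running at every reboot the init.d/systemd route (or a cloud-init per-boot configuration) is still needed:

#!/bin/bash
# EC2 user-data script: runs as root on first boot by default
cd /home/ubuntu/myapp               # placeholder for the directory containing hello.py
nohup python hello.py > /var/log/hello.log 2>&1 &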
I have coded a Python script for Twitter automation using Tweepy. When I run it on my own Linux machine as python file.py, it runs successfully and keeps running, because I have specified repeated tasks inside the script, and I don't want to stop it either. But as it is on my local machine, the script might get stopped when my internet connection is off or at night, so I can't keep the script running on my PC the whole day.
So is there any way, website, or method where I could deploy my script and have it execute there forever? I have heard about cron jobs in cPanel, which can help with repeated tasks, but in my case I want to keep my script running on the machine until I close it myself.
Are there any such solutions? Most Twitter bots I see run forever, meaning their script is being executed somewhere 24x7. This is what I want to know: how is that possible?
As mentioned by Jon and Vincent, it's better to run the code from a cloud service. But either way, I think what you're looking for is what to put into the terminal so the code keeps running even after you close the terminal. This is what worked for me:
nohup python code.py &
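If you don't redirect it, nohup appends the script's output to a file called nohup.out in the current directory; you can also redirect it explicitly and follow it later (bot.log is just an example name):

nohup python code.py > bot.log 2>&1 &
tail -f bot.log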
You can add a systemd .service file (a minimal example follows this list), which can have added benefits such as:
logging (compressed logs at a central place, or over network to a log server)
disallowing access to /tmp and /home-directories
restarting the service if it fails
starting the service at boot
setting capabilities (ref setcap/getcap), disallowing file access if the process only needs network access, for instance
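A minimal sketch of such a unit, assuming a hypothetical script at /opt/bot/code.py running as a dedicated user (all names and paths here are placeholders):

# write the unit file (requires root), then enable it
sudo tee /etc/systemd/system/mybot.service > /dev/null <<'EOF'
[Unit]
Description=Long-running Python bot
After=network-online.target

[Service]
ExecStart=/usr/bin/python3 /opt/bot/code.py
User=botuser
Restart=on-failure
PrivateTmp=yes
ProtectHome=yes

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now mybot.service
# the service's output then ends up in journald:
journalctl -u mybot.service -f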
I have a Python script and am wondering whether there is any way to ensure that the script runs continuously on a remote computer. For example, if the script crashes for whatever reason, is there a way to start it up again automatically instead of having to remote desktop in? Are there any other factors I have to be aware of? The script will be running on a Windows machine.
Many ways. In the case of Windows, even a simple looping batch file would probably do: just have it start the script in a loop (whenever the script crashes, control returns to the batch file and the script gets restarted).
Maybe you can use XML-RPC to call functions and pass data. Some time ago I did something like what you are asking by using SimpleXMLRPCServer and xmlrpc.client. There are examples of simple configurations in the docs.
It depends on what you mean by "crash". If it's just exceptions and the like, you can catch everything and restart your process from within itself. If it's more than that, one possibility is to run it as a daemon spawned from a separate Python process that acts as a supervisor. I'd recommend supervisord, but that's UNIX-only; you can clone a subset of its functionality, though.