So there are variants of this question - but none quite hit the nail on the head.
I want to run Spyder and do interactive analysis on a server. I have two servers, neither of which has Spyder. Both have Python (they are Linux servers), but I don't have sudo rights to install the packages I need.
In short, the use case is: open Spyder on my local machine, do something (this is where I need help) to use the server's computation power, and then return the results to my local machine.
Update:
I have now set up Python with my packages on one server. Next I need to figure out the kernel name and connect it to Spyder.
Leaving previous version of question up, as that is still useful.
The Docker route is a little intimidating, as is paramiko. What are my options?
(Spyder maintainer here) What you need to do is to create a Spyder kernel on your remote server and connect to it through SSH. That's the only facility we provide to do what you want.
You can find the precise instructions to do that in our docs.
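For reference, the flow is roughly as follows (check the linked docs for the exact, current steps; names here are illustrative). On the remote server, install the kernel package into your user site-packages and start a kernel:

pip install --user spyder-kernels
python -m spyder_kernels.console

The kernel reports the name of a connection file (something like kernel-12345.json). Copy that file to your local machine, then in Spyder open the IPython console menu's "Connect to an existing kernel" option, point it at the connection file, mark it as a remote (SSH) kernel, and enter the server's SSH host and credentials.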
I did a long search for something like this in my past job, when we wanted to quickly iterate on code which had to run across many workers in a cluster. All the commercial and open source task-queue projects that I found were based on running fixed code with arbitrary inputs, rather than running arbitrary code.
I'd also be interested to see if there's something out there that I missed. But in my case, I ended up building my own solution (unfortunately not open source).
My solution was:
1) I made a Redis queue where each task consisted of a zip file with a bash setup script (for pip installs, etc), a "payload" Python script to run, and a pickle file with input data.
2) The "payload" Python script would read in the pickle file or other files contained in the zip file. It would output a file named output.zip.
3) The task worker was a Python script (running on the remote machine, listening to the Redis queue) that would unzip the file, run the bash setup script, then run the Python script. When the script exited, the worker would upload output.zip.
There were various optimizations, like the worker not running the same bash setup script twice in a row (it remembered the SHA1 hash of the most recent setup script). So, anyway, in the worst case you could do that; a rough sketch of what such a worker loop might look like is below. It was a week or two of work to set up.
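For illustration only, a minimal sketch of such a worker loop (my real code isn't open source, so every name here is made up; it assumes a Redis list called "tasks" holding raw zip blobs, the setup.sh / payload.py / output.zip layout described above, and does no error handling):

import io
import os
import subprocess
import tempfile
import zipfile

import redis  # third-party client, pip install redis

r = redis.Redis(host="localhost", port=6379)

while True:
    # BLPOP blocks until a task (a zip file as raw bytes) appears on the queue
    _, blob = r.blpop("tasks")
    with tempfile.TemporaryDirectory() as workdir:
        zipfile.ZipFile(io.BytesIO(blob)).extractall(workdir)
        # run the environment setup, then the payload, inside the unpacked directory
        subprocess.run(["bash", "setup.sh"], cwd=workdir, check=True)
        subprocess.run(["python", "payload.py"], cwd=workdir, check=True)
        # the payload is expected to have written output.zip; push it back as the result
        with open(os.path.join(workdir, "output.zip"), "rb") as f:
            r.rpush("results", f.read())

The real system also needs timeouts, error reporting, and the setup-script caching mentioned above, but the core loop is about this small.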
Edit:
A second (much more manual) option, if you just need to run on one remote machine, is to use sshfs to mount the remote filesystem locally, so you can quickly edit the files in Spyder. Then keep an ssh window open to the remote machine, and run Python from the command line to test-run the scripts on that machine. (That's my standard setup for developing Raspberry Pi programs.)
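For example (user, host, and paths are placeholders; create the local mount point directory first), mounting the remote project directory is a one-liner:

sshfs user@remote-server:/home/user/project ~/remote-project

You then edit the files under ~/remote-project in Spyder, and in a separate ssh session on the server you run python script.py so the code executes with the server's Python and hardware.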
I know how to run a locally written Python script on a remote server, and I have seen a lot of questions in that regard. But I am in a situation where I cannot install Python packages on the remote server I am accessing. Specifically, I need to use pypostal, which requires libpostal to be installed, and I cannot do that. Moreover, I need pyspark to work with Hive tables.
Therefore, I need the script to run locally, where I can manage my packages and everything executes fine, but certain commands need to access the server in order to grab data. For example, using pyspark to get Hive tables on the server into a local dataframe. Essentially, I need all the Python to be executed using my local distribution with my local packages but perform its actions on the remote server.
I have looked into things like paramiko, but as far as I can work out it is just an SSH client, which would use the Python distro on the remote server and not my local one. Though perhaps I don't understand how to use it properly.
I am running Python 3.6 on Ubuntu 18.04 using WSL. The packages I am using are pandas, numpy, pyspark, and postal (and hence libpostal).
TLDR;
Is it possible to run a script locally, have parts of it execute remotely but using my local Python? Or if there are other possible solutions, I would be grateful.
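As a concrete illustration of the pyspark part of the question, the rough idea is to point a local Spark session at the remote Hive metastore and collect the result locally. This is only a sketch: the metastore URI, database, and table names are placeholders, and it assumes the metastore (and any HDFS data it references) is reachable over the network from the local machine.

from pyspark.sql import SparkSession

# Local Spark session configured to talk to the remote Hive metastore (URI is a placeholder),
# then a Hive table is pulled into a local pandas DataFrame.
spark = (SparkSession.builder
         .appName("local-analysis")
         .config("hive.metastore.uris", "thrift://remote-server:9083")
         .enableHiveSupport()
         .getOrCreate())

df = spark.sql("SELECT * FROM some_db.some_table LIMIT 1000").toPandas()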
I'm having some problems making a Python file run every time the AWS server boots.
I am trying to run a Python file that starts a web server on an Amazon Web Services EC2 instance.
But my ability to edit the systemd folder and other folders such as init.d is limited.
Is there anything wrong?
Sorry, I don't really understand EC2's OS; a lot of the usual methods don't seem to work on it.
What I usually do via ssh to start my server is:
python hello.py
Can anyone tell me how to run this file automatically every time the system reboots?
It depends on your Linux OS, but you are on the right track (init.d). That is exactly where you'd put arbitrary shell scripts to be run on startup.
Here is a great HOWTO and explanation:
https://www.tldp.org/HOWTO/HighQuality-Apps-HOWTO/boot.html
and another Stack Overflow question specific to running a Python script:
Run Python script at startup in Ubuntu
If you want to share your Linux OS, I can be more specific.
EDIT: This may help; it looks like there is some sort of launch wizard:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
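As a rough illustration (the directory and script name are placeholders), a user-data shell script for this case could be as simple as:

#!/bin/bash
cd /home/ec2-user/myapp
nohup python hello.py &

One caveat worth checking in the docs: by default, user-data scripts run only on the first boot of a new instance, so if the server has to start on every reboot you may still want a cron @reboot entry, an init.d script, or a cloud-init setting that re-runs user data.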
I have a Windows 7 system, and I installed VirtualBox 5.1.26 on it.
In this VirtualBox I installed a 64-bit Debian Linux server. (I think I configured it correctly; it is getting enough memory.)
When I run a Python script on it (a web-scraping script that processes around 1000 pages and loads them into a database), I always get the same error message after a few minutes:
Unable to allocate and lock memory. The virtual machine will be paused. Please close applications to free up memory or close the VM.
Or sometimes an error message about running out of time (when it tries to load a website).
On the Windows 7 system my script works without any problem, so I am a little confused now: what is the problem here?
First, check the parameters of your virtual machine; you might have given it more RAM or processors than the host actually has (or not enough).
If that is not the case, close everything else in the VM and start only the script.
These errors generally mean that you don't have enough resources to perform the operation.
Check that your syntax is OK and that you are using the same version of Python on both systems.
Note that the VM is a guest system and can't have as many resources as your main OS, because otherwise the main OS would die in some circumstances.
I have written a Python script for Twitter automation using Tweepy. When I run it on my own Linux machine as python file.py, it runs successfully and keeps running, because I have specified repeated tasks inside the script and I don't want to stop it. But since it is on my local machine, the script can get stopped when my internet connection is off or at night, so I can't keep the script running on my PC all day.
So is there any way, website, or method by which I could deploy my script and have it execute there forever? I have heard about cron jobs in cPanel, which can help with repeated tasks, but in my case I want my script to keep running on the machine until I close it myself.
Are there any such solutions? Most of the Twitter bots I see run forever, meaning their script is being executed somewhere 24x7. That is what I want to know: how is that possible?
As mentioned by Jon and Vincent, it's better to run the code from a cloud service. But either way, I think what you're looking for is what to put into the terminal to run the code even after you close the terminal. This is what worked for me:
nohup python code.py &
You can add a systemd .service file (a minimal example follows the list below), which can have the added benefits of:
logging (compressed logs at a central place, or over network to a log server)
disallowing access to /tmp and /home-directories
restarting the service if it fails
starting the service at boot
setting capabilities (ref setcap/getcap), disallowing file access if the process only needs network access, for instance
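For example, a minimal unit file along these lines (the paths, bot name, and Python interpreter location are placeholders) could be saved as /etc/systemd/system/twitterbot.service and enabled with systemctl enable --now twitterbot.service:

[Unit]
Description=Twitter automation bot
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/python3 /home/user/code.py
Restart=on-failure
PrivateTmp=true
ProtectHome=read-only

[Install]
WantedBy=multi-user.target

Restart=on-failure gives the automatic restart mentioned above, and PrivateTmp/ProtectHome are examples of the sandboxing options.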
We have a server running Windows 7 Pro. I have several Python scripts I'd like to save to the server and have it so that client computers can run them by simply double-clicking. The client computers are all running OSX. This is proving to be... problematic.
First I tried to simply make the Python scripts executable, but this doesn't seem to be possible on a Windows server -- since you can't set the 'executable' flag, double-clicking on a file will always open it in an editor (unless I were to go to every single computer and make .py files open with Python). Trying to create a shell script has the same problem -- there's no way to make them executable from the server.
My solution was to just make a simple AppleScript app that sends a command to launch the script. Unfortunately, as soon as I copy the app to the server, it stops working. It seems that OSX apps refuse to execute properly when saved to the server -- if you run the file, nothing happens at all.
Is there a simple solution I'm overlooking?
This is probably what you're looking for: http://oreilly.com/catalog/samba/chapter/book/ch05_03.html says that Samba clients (which OS X uses to connect to Windows shares) can map the archive/hidden/system file attributes to the owner/group/world executable bits, respectively.
Try setting those attributes on the script file and make sure its first line is #!/usr/bin/python. If this mapping is enabled by default, the script will run on double-click.
Actually, the issue is that Windows has no equivalent of the execute bit for files.
The solution is to change the mount options on the share so that all files have their execute bit set.
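On the OS X clients, for example, mounting the share from the command line with mount_smbfs lets you force a file mode (the share name, user, and mount point are placeholders, and I'm going from memory on the flags, so check man mount_smbfs):

mkdir -p ~/scripts-share
mount_smbfs -f 0755 -d 0755 //user@winserver/scripts ~/scripts-share

With the files appearing as mode 0755 they carry an execute bit, which is what the double-click approach needs, alongside a #!/usr/bin/python first line in each script.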