I recently discovered the pywinrm module, which enables me, from a Linux machine, to establish a session with a Windows Server 2008/2012 machine and then send DOS commands, cmdlets, and scripts to be executed remotely, collecting the output to work with.
Example of pywinrm usage:
# Establish a session (NTLM auth)
import winrm
s = winrm.Session('server', auth=('user', 'password'), transport='ntlm')
# Run a command and collect its output
r = s.run_cmd('dir')
print(r.status_code)       # 0 on success
print(r.std_out.decode())  # the command's stdout
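PowerShell scripts go through run_ps rather than run_cmd; a minimal sketch (the script body is just an illustration):
ps_script = """Get-ChildItem C:\\ | Select-Object Name"""
r = s.run_ps(ps_script)
print(r.std_out.decode())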
I now have to find another solution, because my boss wants me to do the same with some Windows Server 2000 and 2003 machines (yes, Server 2000 still exists for the moment in certain corporate environments).
From what I found online, the main obstacle seems to be the absence of WinRM in those earlier versions of Windows Server.
Which solutions would you suggest, given that I can't install anything but Python modules?
Related
I know how to run a locally written Python script on a remote server, and I have seen a lot of questions in that regard. But I am in a situation where I cannot install Python packages on the remote server I am accessing. Specifically, I need to use pypostal, which requires libpostal to be installed, and I cannot do so. Moreover, I need pyspark to work with Hive tables.
Therefore, I need the script to run locally, where I can manage my packages and everything executes fine, but certain commands need to access the server in order to grab data. For example, using pyspark to get Hive tables on the server into a local dataframe. Essentially, I need all the Python to be executed using my local distribution with my local packages but perform its actions on the remote server.
I have looked into things like paramiko, but as far as I can work out, it is just an SSH client, which would use the Python distro on the remote server and not the local one. Though perhaps I don't understand how to use it properly.
I am running Python 3.6 on Ubuntu 18.04 using WSL. The packages I am using are pandas, numpy, pyspark, and postal (and hence libpostal).
TL;DR:
Is it possible to run a script locally, have parts of it execute remotely, but using my local Python? If there are other possible solutions, I would be grateful.
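For example, I imagine something like the following untested sketch, where a local SparkSession points at the remote Hive metastore, so the query runs locally but reads the server's tables (the thrift URI and table name are placeholders, not known values):
from pyspark.sql import SparkSession
spark = (SparkSession.builder
         .appName('local-with-remote-hive')
         .config('hive.metastore.uris', 'thrift://remote-server:9083')
         .enableHiveSupport()
         .getOrCreate())
df = spark.sql('SELECT * FROM some_db.some_table LIMIT 10')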
I am currently running Python 3.5 scripts on two VM instances on GCP from a local PyCharm session running on my Mac (see below for detailed environment specifications).
I have two different projects in GCP which look similar. I reviewed their setup with our cloud admin and we can't see any major difference, or at least not an obvious one. I created two Deep Learning images on GCP using the following Cloud SDK command lines, one within each project:
export PROJECT=[MY_PROJECT_NAME]
export INSTANCE_ROOT=$USER-vm
export ZONE=europe-west4-a
export IMAGE_FAMILY=tf-latest-gpu
export INSTANCE_TYPE=n1-highmem-8
export GPU_TYPE=v100
export GPU_COUNT=1
export INSTANCE_NAME=$INSTANCE_ROOT-$GPU_TYPE-$GPU_COUNT
gcloud compute instances create $INSTANCE_NAME \
--zone=$ZONE \
--image-family=$IMAGE_FAMILY \
--image-project=deeplearning-platform-release \
--maintenance-policy=TERMINATE \
--accelerator=type=nvidia-tesla-$GPU_TYPE,count=$GPU_COUNT \
--machine-type=$INSTANCE_TYPE \
--boot-disk-size=200GB \
--metadata=install-nvidia-driver=True \
--scopes=storage-rw
Both images are identical.
I configured two remote SSH interpreters in PyCharm and deployed my Python code on both virtual machines. Everything is identical in terms of VM instance configuration (OS, Python version / libs, source code, etc.) and PyCharm remote interpreter configuration.
In both cases, the ssh ingress connection to the instance (on port 22) works pretty well.
Yet, when calling plt.show() to display images using matplotlib, the images get displayed in one setup but not in the other one.
This is not a matter of setting the proper ssh configuration (-X option on the command line, X11Forwarding, etc.). I already checked that, and anyway one of my VMs does a pretty good job of displaying my images within this configuration.
I debugged the execution and discovered that PyCharm automatically handles X display by implementing its own matplotlib FigureCanvas. When using a remote SSH interpreter, the show() function actually opens a socket back to the defined host (i.e. my local Mac) and sends the buffer to be displayed:
sock = socket.socket()
sock.connect((HOST, PORT))  # HOST/PORT identify the local (client) machine
[..]
sock.send(buffer)           # send the rendered figure buffer for display
This is precisely where my two configurations diverge:
The one working tries to connect on localhost:53725 and succeeds:
<socket.socket fd=28, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 42316), raddr=('127.0.0.1', 53725)>
The one failing tries to connect on localhost:53725 as well but gets an exception.
My strongest assumption is that some network configuration differs between the two GCP projects and prevents the connection to localhost:53725 in the second one.
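To test that assumption, a minimal check I can run on the failing VM is to connect to the same port directly (the port number is taken from the traces above):
import socket
sock = socket.socket()
try:
    sock.connect(('127.0.0.1', 53725))
    print('port reachable')
except OSError as exc:
    print('connection failed:', exc)
finally:
    sock.close()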
However, beyond that I have no idea what might happen and/or how to fix it.
Any idea / suggestion will be appreciated.
Thanks,
Laurent
--
Detailed environment specifications:
PyCharm 2018.2.4 (Professional Edition)
Build #PY-182.4505.26, built on September 19, 2018
Licensed to PyCharm Evaluator
Expiration date: October 27, 2018
JRE: 1.8.0_152-release-1248-b8 x86_64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
macOS 10.14
Ok. It seems to be a bug and I found a workaround.
I share it as it might save hours of troubleshooting and debugging to anyone stumbling on the same problem.
The problem actually occurs when you remain in the same PyCharm session and switch from one interpreter to the other.
If you quit PyCharm and start it again, the local display will work with whichever of the interpreters / VMs you run first. Then, if you switch to the second one, it fails.
Everything looks as if PyCharm sets some kind of lock on the port or somewhere else, which prevents you from switching seamlessly from one interpreter to another.
I'll share these insights with the PyCharm support team. BTW, other than that, this local display feature with remote interpreters is awesome and works just fine.
So there are variants of this question - but none quite hit the nail on the head.
I want to run Spyder and do interactive analysis on a server. I have two servers; neither has Spyder. They both have Python (they are Linux servers), but I don't have sudo rights to install the packages I need.
In short, the use case is: open Spyder on the local machine, do something (I need help here) to use the servers' computation power, and then return the results to the local machine.
Update:
I have updated Python with my packages on one server. Now I need to figure out the kernel name and link it to Spyder.
Leaving previous version of question up, as that is still useful.
The Docker approach is a little intimidating, as is paramiko. What are my options?
(Spyder maintainer here) What you need to do is to create a Spyder kernel on your remote server (typically by running python -m spyder_kernels.console there) and connect to it through SSH. That's the only facility we provide to do what you want.
You can find the precise instructions to do that in our docs.
I did a long search for something like this in my past job, when we wanted to quickly iterate on code which had to run across many workers in a cluster. All the commercial and open source task-queue projects that I found were based on running fixed code with arbitrary inputs, rather than running arbitrary code.
I'd also be interested to see if there's something out there that I missed. But in my case, I ended up building my own solution (unfortunately not open source).
My solution was:
1) I made a Redis queue where each task consisted of a zip file with a bash setup script (for pip installs, etc.), a "payload" Python script to run, and a pickle file with input data.
2) The "payload" Python script would read in the pickle file or other files contained in the zip file. It would output a file named output.zip.
3) The task worker was a Python script (running on the remote machine, listening to the Redis queue) that would unzip the file, run the bash setup script, then run the Python script. When the script exited, the worker would upload output.zip; a rough sketch of this loop follows below.
There were various optimizations, like the worker not running the same bash setup script twice in a row (it remembered the SHA1 hash of the most recent setup script). So, anyway, in the worst case you could do that. It was a week or two of work to set up.
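A minimal sketch of such a worker, assuming redis-py, a queue named 'tasks' holding paths to task zips, and the file names used above (all illustrative; the original implementation is not public):
import hashlib
import subprocess
import tempfile
import zipfile
import redis

r = redis.Redis()
last_setup_hash = None  # lets us skip re-running an identical setup script

while True:
    _, task_zip = r.blpop('tasks')  # block until a task path arrives
    workdir = tempfile.mkdtemp(prefix='task-')
    with zipfile.ZipFile(task_zip.decode()) as zf:
        zf.extractall(workdir)

    # Only run the bash setup script if it changed since the last task
    setup_hash = hashlib.sha1(open(workdir + '/setup.sh', 'rb').read()).hexdigest()
    if setup_hash != last_setup_hash:
        subprocess.run(['bash', 'setup.sh'], cwd=workdir, check=True)
        last_setup_hash = setup_hash

    # The payload reads its pickled inputs and writes output.zip
    subprocess.run(['python', 'payload.py'], cwd=workdir, check=True)
    r.rpush('results', workdir + '/output.zip')  # the "upload" step, simplified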
Edit:
A second (much more manual) option, if you just need to run on one remote machine, is to use sshfs to mount the remote filesystem locally, so you can quickly edit the files in Spyder. Then keep an ssh window open to the remote machine, and run Python from the command line to test-run the scripts on that machine. (That's my standard setup for developing Raspberry Pi programs.)
(note I doubt this is specific to PyQt so I've tagged with Qt too)
We have two test suites (call them A and B) that we run with pytest on our dev workstations:
python -m pytest -c configfile -s -v A
python -m pytest -c configfile -s -v B
Suite B (and only that one) tests our PyQt components; A doesn't have any PyQt in it. We defined a project A in Jenkins (version 1.658, btw) to run suite A: it runs without issue in Jenkins. We did the same, defining a project B in Jenkins to run suite B: this one fails intermittently after many tests, with a SYSTEM log message and a WARNING log message from Qt (caught by setting a handler via QtCore.qInstallMessageHandler()). The Jenkins log that captures test suite B's stdout is:
SYSTEM log message from Qt: WindowCreationData::create: CreateWindowEx failed (Not enough storage is available to process this command.)
WARNING log message from Qt: Failed to create platform window for QWidgetWindow(0x705d260, name="FramedPartWidgetWindow") with flags QFlags<Qt::WindowType>(Window|WindowTitleHint|WindowSystemMenuHint|WindowMinMaxButtonsHint|WindowCloseButtonHint|WindowFullscreenButtonHint) (context: category=default)
Build step 'Execute Windows batch command' marked build as failure
The last line is output by Jenkins script running test suite B.
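For reference, the Qt messages above are caught with a handler along these lines (a sketch; the label mapping is ours, and in Qt QtSystemMsg is an alias of QtCriticalMsg, hence the "SYSTEM" label):
from PyQt5 import QtCore

def message_handler(msg_type, context, message):
    # context (a QMessageLogContext) is unused here but required by the signature
    labels = {
        QtCore.QtDebugMsg: 'DEBUG',
        QtCore.QtWarningMsg: 'WARNING',
        QtCore.QtCriticalMsg: 'SYSTEM',
        QtCore.QtFatalMsg: 'FATAL',
    }
    print('%s log message from Qt: %s' % (labels.get(msg_type, 'INFO'), message))

QtCore.qInstallMessageHandler(message_handler)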
On the Jenkins machine (a Windows 7 Pro 64-bit platform, btw) that runs the test suites, I can open a Windows command shell and if I run the test suites from there, both test suites run without issue. Then I open a web browser, go to the Jenkins project page for suite B and click "Build now": this runs the same thing I ran from command shell, but I get the above error. If I do 10 builds, the above will happen for a different test every time, although always in the same "area". If I filter out the tests in the vicinity of where the failure occurred, the test runs further, but after removing 4 test classes this way, this no longer helps.
At first I thought the issue was not the desktop, because I am logged in. But one difference between the command-shell run and the Jenkins run is that from the shell, test suite B opens many (PyQt) windows and closes them, while from Jenkins I can't see any windows open, so they seem to be opening on some "virtual" desktop. So maybe the issue is the desktop after all. Do I need to somehow configure that virtual desktop to have a larger graphics capacity?
The error seems to indicate that the process started by Jenkins is running out of some resource, but it's not clear what: there is plenty of drive space and memory.
Anyone have any idea where to go from here? I did a Google search and all I could find are these; they don't look too promising, although I will try their suggestions:
Developer Central
Not enough storage is available to process this command
Win32Exception Not enough storage is available to process this command
I'm not familiar with the intricacies of how the Jenkins service on Windows runs processes, so I'm at a loss.
Update 20161219: Apparently this is a known issue with GUI testing from Windows services; see my post on the Bitnami Jenkins forum.
Apparently this is a known issue with GUI testing from Windows services (see my post on the Bitnami Jenkins forum), as is the case with the Bitnami Jenkins stack we use. As I mention in that post, the bottom of the page https://wiki.jenkins-ci.org/display/JENKINS/Tomcat says GUI testing on Windows is not likely to work when Jenkins is installed using Tomcat as a container installed as a service. The only option seems to be to set up Tomcat to run using the Windows Scheduler (instead of as a service), but unfortunately the Bitnami stack we use for Jenkins does not seem to allow this, so the only solution for us is to install Jenkins from scratch and Tomcat as a scheduled task.
It appears that on Windows (based on the docs for setting up Jenkins to test GUIs via Squish), the approach that works is to:
Install the Jenkins master (this should be doable via a Bitnami stack in a Linux VM).
Install a Windows slave. Make sure not to start the slave as a Windows service (under Launch method): Windows services are intended to run command-line applications, not applications which consist of a GUI. Starting the Jenkins slave via JNLP ("Launch slave agents via Java Web Start") works fine.
Set up a node inside Jenkins at Manage Jenkins | Manage Nodes | New Node.
Read https://kb.froglogic.com/display/KB/Automation+on+Windows
This page seems to aggregate several posts related to this issue.
I need to run a VBS script on a Windows machine from a Linux machine. I used pywinrm to establish the connection between Windows and Linux. The VBS script performs a set of installations. Is there any way I can run this VBS script without having to use "CredSSP" authentication?
Yes, you can use Kerberos in a double-hop scenario initiated using pywinrm. A minor patch needs to be made to pywinrm in order to get forwardable Kerberos tickets, which is outlined here:
https://github.com/diyan/pywinrm/issues/58
You will need to perform SPN registration as needed for the services you are connecting to on the second hop. You'll also need to configure the computer account of the first hop to be trusted for delegation to the desired services using Kerberos.
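Once that is in place, the session itself is set up just like an NTLM one but with the kerberos transport; a minimal sketch (host, user, and script path are illustrative, and pywinrm must be installed with its Kerberos extras):
import winrm

s = winrm.Session('windows-host.example.com',
                  auth=('user@EXAMPLE.COM', 'password'),
                  transport='kerberos')
# cscript runs the .vbs; //Nologo suppresses the startup banner
r = s.run_cmd('cscript', ['//Nologo', 'C:\\scripts\\install.vbs'])
print(r.status_code, r.std_out.decode())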
Good luck!