I want to run a local batch file on a remote system with WMI, instead of running a batch file that already lives on the remote system. The reason is that the directory I reference in the batch file needs to be variable and adjustable by the user. How exactly can I do this? Do I reference the local file and run it, or send it to the remote system? How do I code it either way?
The reason I chose WMI was because it has been extremely reliable thus far, and I have no intention of reusing PsExec, and I have no need for ssh.
Referencing the local file from the remote system is a bad idea, because the remote system's user would need access to the file system on your local machine, which is, from the remote system's point of view, itself a remote system.
You would have to create a share on your local machine that is accessible from the remote system, and modify the batch file to access the UNC path (e.g. with pushd).
The best way would be to copy the batch to the remote system (or create it dynamically on the remote system) and execute it from there.
How to create remote processes is covered in this question, and several ways to copy files to a remote system are described here.
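As a rough illustration of the "copy, then execute remotely" approach, here is a minimal sketch using Tim Golden's third-party wmi package (pip install wmi). It assumes the batch file has already been copied to C:\temp\job.bat on the remote machine; the host name, credentials, and the user-adjustable directory passed as an argument are all placeholders.

import wmi

# Connect to the remote machine over WMI (host name and credentials are placeholders).
conn = wmi.WMI("REMOTEPC", user=r"REMOTEPC\admin", password="secret")

# Win32_Process.Create starts a process on the remote box and returns
# (process_id, return_value); a return_value of 0 means the process was created.
process_id, result = conn.Win32_Process.Create(
    CommandLine=r'cmd.exe /c "C:\temp\job.bat" "C:\adjustable\target\dir"'
)
print(process_id, result)

Passing the adjustable directory as an argument (read inside the batch file as %1) keeps the batch file itself static, so it only needs to be copied once.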
Related
I have a Python script inside a container that needs to continuously read changing values inside a file located on the host file system. Using a volume to mount the file won't work because that only captures a snapshot of the values in the file at that moment. I know it's possible since the node_exporter container is able to read files on the host filesystem using custom methods in Golang. Does anyone know a general method to accomplish this?
I have a Python script [...] that needs to continuously read changing values inside a file located on the host file system.
Just run it. Most Linux systems have Python preinstalled. You don't need Docker here. You can use tools like Python virtual environments if your application has Python library dependencies that need to be installed.
Is it possible to get a Docker container to read a file from host file system not using Volumes?
You need some kind of mount, most likely a bind mount: docker run -v /host/path:/container/path image-name. Make sure not to overwrite the application's code in the image when you do this, since the mount will completely hide anything at that path in the underlying image.
Without a bind mount, you can't access the host filesystem at all. This filesystem isolation is a key feature of Docker.
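To make this concrete, here is a minimal sketch of the container side, assuming the host file has been bind-mounted in; the paths, file name, and polling interval are made up for illustration.

# reader.py - runs inside the container; assumes the host file was bind-mounted,
# e.g.  docker run -v /host/metrics.txt:/data/metrics.txt image-name
import time

WATCHED_FILE = '/data/metrics.txt'  # hypothetical container-side path

while True:
    # Re-open on every pass so the latest host-side contents are read each time;
    # a bind mount exposes the live file, not a one-time snapshot.
    with open(WATCHED_FILE) as f:
        print(f.read().strip())
    time.sleep(5)

One caveat: if the host process replaces the file (write to a temp file, then rename), a bind mount of the single file can keep pointing at the old inode; bind-mounting the parent directory instead avoids that.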
...the [Prometheus] node_exporter container...
Reading from the GitHub link in the question, "It's not recommended to deploy it as a Docker container because it requires access to the host system." The docker run example there uses a bind mount to access the entire host filesystem, circumventing Docker's filesystem isolation.
I followed the instructions below to set up a remote interpreter and remote deployment.
https://medium.com/#erikhallstrm/work-remotely-with-pycharm-tensorflow-and-ssh-c60564be862d
It says the following:
Deployment
The remote interpreter can not execute a local file, PyCharm have to copy your source files (your project) to a destination folder on your remote server, but this will be done automatically and you don’t need to think about it!
However, even after setting up automatic deployment via Tools | Deployment | Automatic Upload (always), it is not working as expected and does not seamlessly sync files between the remote and local machines.
Question
So how can I skip deployment and syncing between the remote and local folders altogether, and just run/debug my local code using the remote interpreter?
I mean, I should be able to put a breakpoint in a local .py file and debug it using the remote interpreter. That would make life much easier, with no need to worry about syncing.
I'm connected to a VM on a private network at address 'abc.def.com' using ssh, and on that VM there's an application that hosts a Python web app (IPython Notebook) that I can access by pointing my local browser to 'abc.def.com:7777'.
From that web app I can call shell commands by preceding them with '!', for example !ls -lt will list the files in the VM current working directory. But since I'm using my own laptop's browser, I think I should be able to run shell commands on my local files as well. How would I do that?
If that's not possible, what Python/shell command can I run from within the web app to automatically get my laptop's IP address to use things like scp? I know how to get my IP address, but I'd like to create a program that will automatically enable scp for whoever uses it.
You have ssh access, so you could write a Python function that transfers files via scp, the secure-copy command, which uses ssh to communicate. If you exchange keys with the server you won't have to enter a password, so I see no problem from that standpoint. The issue is whether your local machine has an address that can be reached from the server.
I work on various remotes from my laptop all day, and from my laptop to the server I could use a function like this:
import subprocess
def scp_to_server(address, local_file, remote_file):
    # copies local_file to remote_file on address; no password prompt if ssh keys are exchanged
    subprocess.call(['scp', local_file, "myusername@{}:{}".format(address, remote_file)])
That would copy a file from my local machine to the remote, provided the paths are correct, I have permission to copy the files, and my local machine's id_rsa.pub key is in the ~/.ssh/authorized_keys file on the remote.
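For example, calling scp_to_server('abc.def.com', 'results.csv', '/home/me/results.csv') (hypothetical file names, using the VM address from the question) would push results.csv from the laptop to the VM.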
I have no way to initiate a secure copy from the remote to my local machine, however, because there is no address for my local machine that I can "see" from the remote.
If I open the terminal on my laptop and run hostname I see mylaptop.local, and on the remote I see remoteserver@where.i.work.edu. But the first is a local address: I can see it from other machines on my LAN at home (because I have configured that), but I can't see mylaptop.local from the remote. There is a way to configure things so I could reach my laptop at home from anywhere, but I've never needed to do that (since I bring the laptop with me), so I can't help you there. I think there are a few more hurdles to get over than you would like.
You could implement the function above on your local machine and transfer the files that way though.
There is a directory on a server that I have no access to except via NFS mounts. I mount the directory via NFS into a local Linux system. New files arrive in the directory, and some older files may be updated by other processes on the server.
I'd like to write a Python script that kicks into action whenever such a file is created or changed. I know it is possible to watch a local directory on Linux from Python using inotify (or dnotify in older versions). However, these do not seem to work for remotely mounted volumes.
What are my options or is there a solution already implemented?
You could try FAM.
FAM can provide an RPC service for monitoring remote files (like a mounted NFS file system).
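If FAM (or a Python binding for it) turns out to be awkward to set up, a plain polling loop over the mounted directory is a crude but dependable fallback. The sketch below is not FAM-based, and the mount point and interval are placeholders.

# poll_nfs.py - not FAM: a simple mtime-polling fallback for a mounted directory
import os
import time

WATCH_DIR = '/mnt/nfs/incoming'  # hypothetical mount point
seen = {}

while True:
    for name in os.listdir(WATCH_DIR):
        path = os.path.join(WATCH_DIR, name)
        mtime = os.path.getmtime(path)
        if seen.get(path) != mtime:
            seen[path] = mtime
            print('new or changed:', path)  # kick off your processing here
    time.sleep(10)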
I'm currently using python-inotify to monitor local directories for changes, and run scripts when they happen.
Now though, I need functionality to monitor a remote directory for changes. The remote directory will be either a git or svn repo, on a server I have root ssh access to. I know about git hooks, but they only get run on commit/push/rebase etc, not on generic changes.
Is there an existing python library I might be able to use for this? Or can I just open an ssh connection in Python and then carry on using python-inotify?
You need file-system-level access for inotify to work, so since you have ssh, the easiest approach is to run the monitor script on the remote system itself.
You can then use something like Twisted to communicate the changes from one system to another over the network.
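As a rough sketch of the remote-side piece (this uses pyinotify and plain stdout rather than Twisted; the watched path and event mask are just examples):

# watch_repo.py - run this on the remote host; it prints one line per change
import sys
import pyinotify

class Handler(pyinotify.ProcessEvent):
    def process_default(self, event):
        # Flush after every line so a reader piping this over ssh sees events immediately.
        print(event.maskname, event.pathname)
        sys.stdout.flush()

wm = pyinotify.WatchManager()
mask = pyinotify.IN_CREATE | pyinotify.IN_MODIFY | pyinotify.IN_DELETE | pyinotify.IN_MOVED_TO
wm.add_watch('/srv/repos/myrepo', mask, rec=True, auto_add=True)  # hypothetical repo path
pyinotify.Notifier(wm, Handler()).loop()

On the local side you could then run something like ssh user@server python watch_repo.py and react to each line it emits, or, as suggested, wire the two ends together with Twisted instead of a plain pipe.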