Python script keeps running when using pyRserve

I am trying to learn how to send a list of lists from Python to an R script that runs statistical methods and returns two or three data frames back to Python.
I stumbled across the pyRserve package. I was able to follow the manual in its documentation, and everything works great at the interactive prompt (>>>). When I run the same code as a script, however, it never finishes. I have installed the Rserve package and started its service in RStudio. Below is the code:
import pyRserve

print "here1"  # prints this line...
conn = pyRserve.connect(host='localhost', port=6311)
print "here2"
a = conn.eval('3+5')
print a
Can anyone please help?

The docs suggest:
$ python
>>> import pyRserve
>>> conn = pyRserve.connect()
They then go on with:
To connect to a different location, host and port can be specified explicitly:
pyRserve.connect(host='localhost', port=6311)
This is not meant to indicate that both lines should be run. The second line is a modifier for the first: if you need an alternate address or port, the session should look like:
$ python
>>> import pyRserve
>>> conn = pyRserve.connect(host='localhost', port=6311)
Also note this caveat for Windows users:
Note On some windows versions it might be necessary to always provide ‘localhost’ for connecting to a locally running Rserve instance.
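Putting it together, here is a minimal sketch of a complete script (assuming Rserve is listening on localhost:6311; the explicit close() is my assumption, but shutting the connection down lets the interpreter exit cleanly instead of keeping the script alive):
import pyRserve

# Assumes Rserve is already running locally on the default port 6311.
conn = pyRserve.connect(host='localhost', port=6311)
try:
    a = conn.eval('3+5')
    print(a)  # result of the R evaluation
finally:
    conn.close()  # release the connection so the script can exit cleanly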

Related

Using Jenkins variables/parameters in Python Script with os.path.join

I'm trying to learn how to use variables from Jenkins in Python scripts. I've already learned that I need to call the variables, but I'm not sure how to implement them in the case of using os.path.join().
I'm not a developer; I'm a technical writer. This code was written by somebody else. I'm just trying to adapt the Jenkins scripts so they are parameterized so we don't have to modify the Python scripts for every release.
I'm using inline Python scripts inside a Jenkins job. The Jenkins string parameters are "BranchID" and "BranchIDShort". I've looked through many questions that talk about how you have to establish the variables in the Python script, but in the case of os.path.join(), I'm not sure what to do.
Here is the original code. I added the part where we establish the variables from the Jenkins parameters, but I don't know how to use them in the os.path.join() function.
# Delete previous builds.
import os
import shutil
BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")
print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc192CS", "Output")):
shutil.rmtree(os.path.join("C:\\Doc192CS", "Output"))
I expect output like: c:\Doc192CS\Output
I am afraid that if I write the following:
if os.path.exists(os.path.join("C:\\Doc", BranchIDshort, "CS", "Output")):
    shutil.rmtree(os.path.join("C:\\Doc", BranchIDshort, "CS", "Output"))
I'll get: c:\Doc\192\CS\Output.
Is there a way to use the BranchIDshort variable in this context to get the output c:\Doc192CS\Output?
User @Adonis gave the correct solution as a comment. Here is what he said:
Indeed you're right. What you would want to do is rather:
os.path.exists(os.path.join("C:\\","Doc{}CS".format(BranchIDshort),"Output"))
(in short, use a format string for the second argument)
So the complete corrected code is:
import os
import shutil
BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")
print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output")):
    shutil.rmtree(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output"))
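As a quick sanity check, the format call builds the expected path (a console sketch, assuming BranchIDshort is "192" as in the question and a Windows host):
$ python
>>> import os
>>> BranchIDshort = "192"
>>> os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output")
'C:\\Doc192CS\\Output'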
Thank you, @Adonis!

Python parallel-ssh run_command does not timeout when using pssh.clients

I have the following code:
from pssh.clients import ParallelSSHClient

hosts = [ IP1, IP2, ... IPn ]
host_config = { dict containing userid & passwd for each host }

clients = ParallelSSHClient(hosts, host_config=host_config,
                            num_retries=1, timeout=3)
output = clients.run_command("ls", stop_on_errors=False, timeout=3)
print output
If my hosts all have valid IPs, then I get output. However, if even one of the IPs is invalid (a non-existent host), then run_command hangs forever. I even tried passing use_pty=True to run_command.
The strange thing is that if I use the deprecated module pssh_client instead, as follows:
from pssh.pssh_client import ParallelSSHClient
it times out as expected. Either a bug was introduced with the new import path, or there is some new way to specify the timeout. I would prefer the recommended way over the deprecated one, but the recommended way is not working for me. Does anyone know if I am doing something wrong here?
This has been confirmed as a bug on the parallel-ssh GitHub site, so this issue is closed. The issue is tracked at:
https://github.com/ParallelSSH/parallel-ssh/issues/133
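Until the fix lands, a minimal workaround sketch (assuming the bug affects your installed version) is to fall back to the deprecated import path, which, as reported above, honors the timeout:
# Workaround: the deprecated import path times out as expected.
from pssh.pssh_client import ParallelSSHClient

clients = ParallelSSHClient(hosts, host_config=host_config,
                            num_retries=1, timeout=3)
output = clients.run_command("ls", stop_on_errors=False, timeout=3)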

PopenSpawn and ftp command

Question: I am trying to put together a minimal example with the latest pexpect, but I can't get a basic example working (source is below) in the following environment:
Windows 10, 64-bit Python 3.4
The command is ftp localhost (the server is FileZilla, running on localhost).
I am aware that Windows support in pexpect is marked as experimental, but it would still be useful to get it working.
The problem: if in the code below I use
co.expect("", timeout=30)
then the script works, because it does not wait for the prompt after login. But since I need to interact and send more complex queries, I need to use
co.expect("ftp>", timeout=30)
and at that point pexpect waits until the timeout. I found that in popen_spawn.py nothing is coming through. Is it possible that self.proc.stdout.fileno() is buffering and waiting indefinitely until the buffer is filled?
import pexpect
from pexpect import popen_spawn
import sys

try:
    hostname = "127.0.0.1"
    co = pexpect.popen_spawn.PopenSpawn('ftp localhost', encoding="utf-8")
    co.logfile = sys.stdout
    co.timeout = 4
    co.expect(":")              # username prompt
    co.sendline("test")
    co.expect(".*word:.*")      # password prompt
    co.sendline("test123")
    co.expect("", timeout=30)   # matches immediately; does not wait for the prompt
    co.sendline('dir')
    co.expect('ftp>')           # this is where it hangs until timeout
    co.close()
except Exception as e:
    print(co)
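For this particular task, the buffering question can be sidestepped entirely by talking to the FTP server directly instead of driving the ftp command through pexpect. A minimal sketch using the standard library's ftplib (host and credentials assumed from the script above):
from ftplib import FTP

ftp = FTP("127.0.0.1")        # connect to the local FileZilla server
ftp.login("test", "test123")  # same credentials as the pexpect script
print(ftp.nlst())             # rough equivalent of the interactive 'dir'
ftp.quit()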

Python Fabric: Trying to distribute different files to hosts

Is there a way to specify what file will be put to host when using Fabric?
I have a list of hosts with platform specified, such as:
host_list = [['1.1.1.1', 'centos5_x64'], ['2.2.2.2','centos6_x86']]
I want to write something like:
from fabric.api import env, execute, put

env.hosts = [x[0] for x in host_list]

def copy():
    put('some_rpm' + <platform>)

execute(copy)
So how can I specify the platform string for each host in env.hosts?
All other steps in my Fabric-based install-and-test script are the same for each host, so the obvious fallback is to write a threaded_copy() that does the job by hand. Still, a Fabric-based solution would be much cleaner...
As always, I've found the answer myself a while after posting the question here :)
def copy():
    platform_string = get_platform_for_host(env.host)
    put('some_rpm' + platform_string)
The env.host variable will hold the current host the function is executing on (tested... works).
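Putting the pieces together, a self-contained sketch (get_platform_for_host is stubbed here by a plain dict built from host_list; that lookup is my assumption, not the original helper):
from fabric.api import env, execute, put

host_list = [['1.1.1.1', 'centos5_x64'], ['2.2.2.2', 'centos6_x86']]
platform_by_host = dict(host_list)           # host -> platform lookup
env.hosts = [host for host, _ in host_list]

def copy():
    # env.host holds the host this task is currently running on
    put('some_rpm' + platform_by_host[env.host])

execute(copy)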

Subprocess statement works in python console but not work in Serverdensity plugin?

In the Python console, the following statement works perfectly fine (I guess using eval that way is not really good, but it's just for testing purposes in this case and will be replaced with proper parsing):
$ python
>>> import subprocess
>>> r = subprocess.Popen(['/pathto/plugin1.rb'], stdout=subprocess.PIPE, close_fds=True).communicate()[0]
>>> data = eval(r)
>>> data
{'test': 1}
When I convert this into a Serverdensity plugin, however, it keeps crashing the agent.py daemon every time it executes the plugin. I was able to narrow it down to the subprocess line, but I could not find out why. Exception catching did not seem to work either.
Here is how the plugin looks:
class plugin1:
    def run(self):
        r = subprocess.Popen(['/pathto/plugin1.rb'], stdout=subprocess.PIPE, close_fds=True).communicate()[0]
        data = eval(r)
        return data
I'm quite new to working with Python and can't really figure out why this won't work. Thanks a lot for any ideas :)
Do you have subprocess imported in the module? Also, what error are you getting? Could you post the error message?
After switching my dev box (maybe because of the different Python version?) I was finally able to get some proper error output.
It was then rather simple: I really just needed to import the missing subprocess module.
For whoever is interested in the solution:
http://github.com/maxigs/Serverdensity-Wrapper-Plugin/blob/master/ruby_plugin.py
Not quite production-ready yet, but it already works for safe input.
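For reference, a minimal sketch of the corrected plugin, which is simply the original with the missing import added:
import subprocess

class plugin1:
    def run(self):
        # With subprocess imported at module level, the agent no longer crashes.
        r = subprocess.Popen(['/pathto/plugin1.rb'],
                             stdout=subprocess.PIPE,
                             close_fds=True).communicate()[0]
        return eval(r)  # still eval-based; to be replaced with proper parsing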
