Is there a way to specify which file will be put to each host when using Fabric?
I have a list of hosts, each with its platform specified, such as:
host_list = [['1.1.1.1', 'centos5_x64'], ['2.2.2.2','centos6_x86']]
I want to write something like:
from fabric.api import env, execute, put

env.hosts = [x[0] for x in host_list]

def copy():
    put('some_rpm' + <platform>)

execute(copy)
So how can I specify the platform string for each host in env.hosts?
All other steps in my Fabric-based install & test script are the same for each host, so the obvious answer is to write a threaded_copy() that will do the job. But still, a Fabric-based solution should be much cleaner...
As always, I've found the answer myself a while after posting the question here :)
def copy():
    platform_string = get_platform_for_host(env.host)
    put('some_rpm' + platform_string)
The env.host variable will hold the current host the function is executing on (tested... works).
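For completeness, here is a minimal sketch of the whole approach. Note that get_platform_for_host is not part of Fabric; one plausible implementation, assuming the host_list shown above, is a plain dict lookup:

from fabric.api import env, execute, put

host_list = [['1.1.1.1', 'centos5_x64'], ['2.2.2.2', 'centos6_x86']]
env.hosts = [x[0] for x in host_list]

# Hypothetical helper: map each host address to its platform string.
platform_by_host = dict(host_list)

def get_platform_for_host(host):
    return platform_by_host[host]

def copy():
    # env.host is set by Fabric to the host currently being executed on.
    platform_string = get_platform_for_host(env.host)
    put('some_rpm' + platform_string)

execute(copy)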
Related
I'm trying to learn how to use variables from Jenkins in Python scripts. I've already learned that I need to read the variables in, but I'm not sure how to use them with os.path.join().
I'm not a developer; I'm a technical writer. This code was written by somebody else. I'm just trying to adapt the Jenkins scripts so they are parameterized so we don't have to modify the Python scripts for every release.
I'm using inline Python scripts inside a Jenkins job. The Jenkins string parameters are "BranchID" and "BranchIDShort". I've looked through many questions that talk about how you have to establish the variables in the Python script, but in the case of os.path.join(), I'm not sure what to do.
Here is the original code. I added the part where we establish the variables from the Jenkins parameters, but I don't know how to use them in the os.path.join() function.
# Delete previous builds.
import os
import shutil
BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")
print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc192CS", "Output")):
    shutil.rmtree(os.path.join("C:\\Doc192CS", "Output"))
I expect output like: c:\Doc192CS\Output
I am afraid that if I use the following code:
if os.path.exists(os.path.join("C:\\Doc", BranchIDshort, "CS", "Output")):
    shutil.rmtree(os.path.join("C:\\Doc", BranchIDshort, "CS", "Output"))
I'll get: c:\Doc\192\CS\Output.
Is there a way to use the BranchIDshort variable in this context to get the output c:\Doc192CS\Output?
User @Adonis gave the correct solution in a comment. Here is what he said:
Indeed you're right. What you would want to do is rather:
os.path.exists(os.path.join("C:\\", "Doc{}CS".format(BranchIDshort), "Output"))
(in short, use a format string for the second argument)
So the complete corrected code is:
import os
import shutil
BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")
print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output")):
    shutil.rmtree(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output"))
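To sanity-check the path construction, an interpreter session on Windows (assuming BranchIDshort holds "192") shows the expected result:

>>> import os
>>> os.path.join("C:\\Doc{}CS".format("192"), "Output")
'C:\\Doc192CS\\Output'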
Thank you, @Adonis!
I am trying to learn how to send a list of lists from Python to an R script that runs statistical methods and returns two or three data frames back to Python.
I stumbled across the pyRserve package. I was able to follow the manual in their documentation, and everything works great on the interactive command line (>>>). When I run the same code as a script, however, it never finishes. I have installed the Rserve package and started its service in RStudio. Below is the code:
import pyRserve

print "here1"  # prints this line...
conn = pyRserve.connect(host='localhost', port=6311)
print "here2"
a = conn.eval('3+5')
print a
Can anyone please help?
The docs suggest:
$ python
>>> import pyRserve
>>> conn = pyRserve.connect()
And then go on with:
To connect to a different location, host and port can be specified explicitly:
pyRserve.connect(host='localhost', port=6311)
This is not meant to indicate that both lines should be run; the second line is a modifier for the first. So if you need an alternate host or port, it should look like:
$ python
>>> import pyRserve
>>> conn = pyRserve.connect(host='localhost', port=6311)
Also note this caveat for Windows users:
Note: On some Windows versions it might be necessary to always provide 'localhost' for connecting to a locally running Rserve instance.
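Putting it together, a minimal end-to-end sketch, assuming Rserve is listening locally on the default port 6311:

import pyRserve

# Passing the host explicitly also works around the Windows caveat above.
conn = pyRserve.connect(host='localhost', port=6311)
try:
    result = conn.eval('3+5')
    print result  # 8.0 (R returns a numeric)
finally:
    conn.close()  # release the connection so the script can exit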
I've tried really hard to find this, but no luck. I'm sure it's possible; I just can't find an example or figure out the syntax for myself.
I want to use Fabric as a library.
I want two sets of hosts.
I want to reuse the same functions for these different sets of hosts (and so cannot use the @roles decorator on said functions).
So I think I need:
from fabric.api import execute, run, env

NODES = ['192.168.56.141', '192.168.56.152']
env.roledefs = {'head': ['localhost'], 'nodes': NODES}
env.user('r4space')

def testfunc():
    run('touch ./fred.txt')

execute(testfunc(), <somehow specify 'head' or 'nodes' as my hosts list and user>)
I've tried a whole range of syntax (hosts=NODES, -H NODES, user='r4space', and much more), but I either get a syntax error or the "No hosts found. Please specify (single)" prompt.
If it makes a difference, ultimately my function defs would live in a separate file that I import into the main file, where hosts etc. are defined and execute is called.
Thanks for any help!
You have some errors in your code.
env.user('r4space') is wrong; it should be env.user = 'r4space'.
When you use execute, the first parameter should be a callable. You have passed the return value of calling testfunc instead.
I think if you fix the last line, it will work:
execute(testfunc, hosts=NODES)
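Putting both fixes together, a minimal working sketch using the role definitions from the question:

from fabric.api import execute, run, env

NODES = ['192.168.56.141', '192.168.56.152']
env.roledefs = {'head': ['localhost'], 'nodes': NODES}
env.user = 'r4space'

def testfunc():
    run('touch ./fred.txt')

# Pass the callable itself; hosts= (or roles=) selects where it runs.
execute(testfunc, hosts=NODES)
execute(testfunc, roles=['head'])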
I have the following code for a GUI that I am developing:
sftp = self.ssh.open_sftp()
try:
    localpath = "/Users/..../signals.txt"
    remotepath = "/data1/.../sd_inputs.txt"
    sftp.put(localpath, remotepath)
finally:
    sftp.close()  # close the SFTP session when the transfer finishes
The 'localpath' points at my laptop, but since this is a GUI that I am developing for users who have their own laptops/computers, is there a command in Paramiko that lets me avoid or circumvent the localpath specification, in much the same way that os.system does for Python?
From your example
os.system("SD_%s.xls" % (self.input2.GetValue()))
there is nothing special about os.system here. You call self.input2.GetValue() to format a string that you pass to os.system. You can do something similar with Paramiko, except that you have to deal with the problem that the local and remote paths are different. Assuming you have a GUI form that gives both pieces of information, it will look something like:
sftp.put(self.localpath.GetValue(), self.remotepath.GetValue())
Actually, I just discovered that I don't need a Paramiko command to circumvent or avoid the local path; Python can find the local path itself:
import os
path = os.getcwd()
localpath = path + "/signals.txt"
print localpath
I just have to put this stuff in the try block.
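Combining the two pieces, a minimal sketch of the upload (keeping the elided remote path from the question, and assuming self.ssh is the GUI's connected SSHClient):

import os

sftp = self.ssh.open_sftp()
try:
    # Resolve the local file relative to the current working directory,
    # so no per-user path has to be hard-coded into the GUI.
    localpath = os.path.join(os.getcwd(), "signals.txt")
    remotepath = "/data1/.../sd_inputs.txt"
    sftp.put(localpath, remotepath)
finally:
    sftp.close()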
I am new to Python and Fabric. We currently use Capistrano and have a setup similar to this:
/api-b2b
    Capfile (with generic deployment/setup info)
    /int   - target host config (like ip, access etc.)
    /prod  - target host config (like ip, access etc.)
    /dev   - target host config (like ip, access etc.)
/api-b2c
    /int
    /prod
    /dev
/application1
    /int
    /prod
    /dev
/application2
    /int
    /prod
    /dev
We are not happy with Capistrano for handling our Java apps; Fabric looks like a better (simpler) alternative.
All the example fabfiles I have seen so far are relatively simple in that they only handle one application for different hosts. I'd like to see some code where different apps/hosts are handled by the same Fabric files/infrastructure (via inheritance etc.) so they share the same logic for common tasks like Git handling, directory creation, and symlinks. I hope you get what I mean: I want the whole logic to be the same; only each app's config (Git repo, target directory) differs. Everything else is the same across the apps (same server layout...).
I want to be able to enter something like this:
$ cd api-b2b
$ fab env_prod deploy
$ cd api-b2c
$ fab env_prod deploy
or
$ fab env_prod deploy:app=api-b2b
$ fab env_prod deploy:app=api-b2c
Any help (and pointers to sample files) highly appreciated!
Cheers,
Marcel
If you genuinely want reuse amongst your Fabric code, the most robust approach is to refactor the commonalities out into a Python module. Libraries like fabtools and cuisine are good examples of what is possible.
If you're looking to have multiple projects, there are a few ways to achieve that result. Assuming you're using a fabfile directory (rather than a single fabfile.py), you'll have a structure like this.
/fabfile
    __init__.py
    b2b.py
    b2c.py
Assuming that you have:
# b2b.py / b2c.py
from fabric.api import *

@task
def deploy():
    # b2b/b2c logic
    pass
When you run fab -l (with an empty __init__.py) you'll see:
Available commands:

    b2b.deploy
    b2c.deploy
To get closer to what you're looking for, you can dynamically look up, from an argument, which deployment target you want to run:
# __init__.py
from fabric.api import *

import b2b
import b2c

@task
def deploy(api):
    globals()[api].deploy()
Which means that on the command line, I can run fab deploy:api=b2b or fab deploy:api=b2c.
Edit 27th Jan
Specifying one or more machines for a task to run on can be achieved on the command line with the -H or -R switches, with the @hosts or @roles decorators, or via the settings in the Fabric environment (env.hosts and env.roles). The Fabric documentation has extensive examples on the execution model that show you all the details.
One way to do this (and potentially not the best way depending on your application) is to dynamically alter the host lists based on the api and the target environment.
# __init__.py
from fabric.api import *

import b2b
import b2c

@task
def deploy(api, target='test'):
    func = globals()[api].deploy
    hosts = globals()[api].deploy_hosts(target)
    execute(func, hosts=hosts)
And now the b2b.py and b2c.py files will look something like:
# b2b.py / b2c.py
from fabric.api import *

@task
def deploy():
    # b2b/b2c logic
    pass

def deploy_hosts(target):
    return {
        'test': ['localhost'],
        'prod': ['localhost'],
        'int':  ['localhost'],
    }[target]
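With that in place, the invocation sketched in the question maps onto calls like:

$ fab deploy:api=b2b,target=prod
$ fab deploy:api=b2c,target=int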