Salt-key command in runner - python

I'm writing a custom SaltStack runner to wrap accepting and rejecting minions. How do I call salt-key from my Python runner in a way that's equivalent to this command line?
salt-key -a {minion_name}

I can't provide you a definitive answer, but here are my two cents:
The source code of the salt-key script is this one. Following the call chain, I reached this module, which contains several classes for key processing.
The module's documentation reads:
The Salt Key backend API and interface used by the CLI. The Key class can be
used to manage salt keys directly without interfacing with the CLI.
This is the mentioned class.
Based on this code, I presume it's used like:
import salt.client
import salt.key

# Reuse the opts that LocalClient loads from the master configuration
client = salt.client.LocalClient()
key_manager = salt.key.Key(client.opts)

# Accept all pending minion keys matching the glob
key_manager.accept('web*')

I know it has been a while since this question was answered, but I would like to add my two cents on this subject.
In order to do key interactions programmatically, we use Wheel in Salt. Usage is rather simple and clear:
from salt import config
from salt import wheel

# Load the master configuration and build a wheel client from it
master_opts = config.master_config('/etc/salt/master')
wheel_client = wheel.WheelClient(master_opts)

# Accept the pending key for the given minion ID
wheel_client.cmd('key.accept', ['minionId1'])
A bunch of other operations can be found in the SaltStack documentation here.
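For completeness, other key operations follow the same pattern. A sketch, assuming the wheel functions key.reject and key.list_all documented for SaltStack's wheel system:
from salt import config
from salt import wheel

master_opts = config.master_config('/etc/salt/master')
wheel_client = wheel.WheelClient(master_opts)

# Reject a pending key instead of accepting it
wheel_client.cmd('key.reject', ['minionId2'])

# List keys in all states (accepted, pending, rejected)
all_keys = wheel_client.cmd('key.list_all')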

Related

Using Jenkins variables/parameters in Python Script with os.path.join

I'm trying to learn how to use variables from Jenkins in Python scripts. I've already learned that I need to call the variables, but I'm not sure how to implement them in the case of using os.path.join().
I'm not a developer; I'm a technical writer. This code was written by somebody else. I'm just trying to adapt the Jenkins scripts so they are parameterized and we don't have to modify the Python scripts for every release.
I'm using inline Jenkins Python scripts inside a Jenkins job. The Jenkins string parameters are "BranchID" and "BranchIDShort". I've looked through many questions that talk about how you have to establish the variables in the Python script, but in the case of os.path.join(), I'm not sure what to do.
Here is the original code. I added the part where we establish the variables from the Jenkins parameters, but I don't know how to use them in the os.path.join() function.
# Delete previous builds.
import os
import shutil

BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")

print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc192CS", "Output")):
    shutil.rmtree(os.path.join("C:\\Doc192CS", "Output"))
I expect output like: c:\Doc192CS\Output
I am afraid that if I do the following code:
if os.path.exists(os.path.join("C:\\Doc", BranchIDshort, "CS", "Output")):
    shutil.rmtree(os.path.join("C:\\Doc", BranchIDshort, "CS", "Output"))
I'll get: c:\Doc\192\CS\Output.
Is there a way to use the BranchIDshort variable in this context to get the output c:\Doc192CS\Output?
User @Adonis gave the correct solution as a comment. Here is what he said:
Indeed you're right. What you would want to do is rather:
os.path.exists(os.path.join("C:\\", "Doc{}CS".format(BranchIDshort), "Output"))
(in short, use a format string for the second argument)
So the complete corrected code is:
import os
import shutil

BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")

print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output")):
    shutil.rmtree(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output"))
Thank you, @Adonis!
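For reference, here is a minimal Python 3 sketch of the same cleanup using pathlib, assuming the Jenkins job can run the inline script under Python 3 (the original code above uses the Python 2 print statement):
import os
import shutil
from pathlib import Path

BranchIDshort = os.getenv("BranchIDshort")

print("Delete any output from a previous build.")
# Build C:\Doc{BranchIDshort}CS\Output as a single path object
output_dir = Path("C:\\") / "Doc{}CS".format(BranchIDshort) / "Output"
if output_dir.exists():
    shutil.rmtree(output_dir)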

Accessing Robot Framework global variables from a prerun modifier

I am invoking Robot Framework on a folder with a command like following:
robot --name MyTestSuite --variablefile lib/global_variables.py --variable TARGET_TYPE:FOO --variable IMAGE_TYPE:BAR --prerunmodifier MyCustomModifier.py ./tests
MyCustomModifier.py contains a simple SuiteVisitor class, which includes/excludes tags and does a few other things based on some of the variable values set.
How do I access TARGET_TYPE and IMAGE_TYPE in that class? The method shown here does not work, because I want access to the variables before tests start executing, and therefore I get a RobotNotRunningError with message Cannot access execution context.
After finding this issue report, I tried to downgrade to version 2.9.1 but nothing changed.
None of the public APIs seems to provide this information, but debugging the main code does reveal an alternative way of obtaining it. It has to be said that this example code works with version 3.0.2 but may not work in the future, as these are internal functions subject to change. That said, I do think the approach will remain valid.
As Robot Framework is an application, it obtains the command line arguments through its main function, run_cli (when running from the command line). This function is filled with the arguments from the system itself, which can be obtained in any Python script via:
import sys
cli_args = sys.argv[1:]
Robot Framework has a function that interprets the command line argument list and turns it into a more readable object:
from robot.run import RobotFramework
import sys
options, arguments = RobotFramework().parse_arguments(sys.argv[1:])
The variables passed with --variable end up in the options dictionary under the 'variable' key as 'NAME:value' strings, while the arguments list contains the data sources (here, ./tests). An example:
options['variable'] = ['TARGET_TYPE:FOO', 'IMAGE_TYPE:BAR']
This should allow you to access the information you need.
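Building on that, a small sketch that turns those 'NAME:value' strings into a dictionary so the prerun modifier can look up TARGET_TYPE and IMAGE_TYPE (the split-based helper is my own, not part of Robot Framework):
import sys
from robot.run import RobotFramework

options, arguments = RobotFramework().parse_arguments(sys.argv[1:])

# Turn ['TARGET_TYPE:FOO', 'IMAGE_TYPE:BAR'] into {'TARGET_TYPE': 'FOO', ...}
variables = dict(v.split(':', 1) for v in options.get('variable', []))

target_type = variables.get('TARGET_TYPE')  # 'FOO'
image_type = variables.get('IMAGE_TYPE')    # 'BAR'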

How to unit test program interacting with block devices

I have a program that interacts with and changes block devices (/dev/sda and such) on linux. I'm using various external commands (mostly commands from the fdisk and GNU fdisk packages) to control the devices. I have made a class that serves as the interface for most of the basic actions with block devices (for information like: What size is it? Where is it mounted? etc.)
Here is one such method querying the size of a partition:
import subprocess

def get_drive_size(device):
    """Returns the maximum size of the drive, in sectors.

    :device the device identifier (/dev/sda and such)"""
    query_proc = subprocess.Popen(["blockdev", "--getsz", device],
                                  stdout=subprocess.PIPE)
    # blockdev returns the number of 512B blocks in a drive
    output, error = query_proc.communicate()
    exit_code = query_proc.returncode
    if exit_code != 0:
        # I have custom exceptions; this is slight pseudo-code
        raise Exception("Non-zero exit code", str(error, "utf-8"))
    return int(output)  # should always be valid
So this method accepts a block device path, and returns an integer. The tests will run as root, since this entire program will end up having to run as root anyway.
Should I try and test code such as these methods? If so, how? I could try and create and mount image files for each test, but this seems like a lot of overhead, and is probably error-prone itself. It expects block devices, so I cannot operate directly on image files in the file system.
I could try mocking, as some answers suggest, but this feels inadequate. If I mock the Popen object, it seems like I start to test the implementation of the method rather than its output. Is this a correct assessment of proper unit-testing methodology in this case?
I am using python3 for this project, and I have not yet chosen a unit-testing framework. In the absence of other reasons, I will probably just use the default unittest framework included in Python.
You should look into the mock module (it has been part of the standard library as unittest.mock since Python 3.3).
It enables you to run tests without the need to depend on any external resources, while giving you control over how the mocks interact with your code.
I would start from the docs on Voidspace.
Here's an example (patch subprocess.Popen in the module where get_drive_size is defined; 'path.to.your.module' is a placeholder):
import unittest
from unittest import mock

class GetDriveSizeTestSuite(unittest.TestCase):
    @mock.patch('path.to.your.module.subprocess.Popen')
    def test_a_scenario_with_mock_subprocess(self, mock_popen):
        # communicate() returns (stdout, stderr); returncode must be an int
        mock_popen.return_value.communicate.return_value = (b'1024', b'')
        mock_popen.return_value.returncode = 0
        self.assertEqual(1024, get_drive_size('/dev/sda'))
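If the concern is that this tests the implementation rather than the output, you can at least make the interaction with the external command explicit. A sketch, using the standard assert_called_once_with from the mock API (this line goes inside the test method above, after calling get_drive_size):
import subprocess

# Verify blockdev was invoked exactly once with the expected arguments
mock_popen.assert_called_once_with(["blockdev", "--getsz", "/dev/sda"],
                                   stdout=subprocess.PIPE)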

Using Python Boto with AWS Support API

I've used boto to interact with S3 with no problems, but now I'm attempting to connect to the AWS Support API to pull back info on open tickets, trusted advisor results, etc. It seems that the boto library has different connect methods for each AWS service. For example, with S3 it is:
conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
According to the boto docs, the following should work to connect to AWS Support API:
>>> from boto.support.connection import SupportConnection
>>> conn = SupportConnection('<aws access key>', '<aws secret key>')
However, there are a few problems I see after digging through the source code. First, boto.support.connection doesn't actually exist. boto.connection does, but it doesn't contain a class SupportConnection. boto.support.layer1 exists, and DOES have the class SupportConnection, but it doesn't accept key arguments as the docs suggest. Instead it takes one argument, an AWSQueryConnection object. That class is defined in boto.connection. AWSQueryConnection takes one argument, an AWSAuthConnection object, a class also defined in boto.connection. Lastly, AWSAuthConnection takes a generic object, with requirements defined in __init__ as:
class AWSAuthConnection(object):
    def __init__(self, host, aws_access_key_id=None,
                 aws_secret_access_key=None,
                 is_secure=True, port=None, proxy=None, proxy_port=None,
                 proxy_user=None, proxy_pass=None, debug=0,
                 https_connection_factory=None, path='/',
                 provider='aws', security_token=None,
                 suppress_consec_slashes=True,
                 validate_certs=True, profile_name=None):
So, for kicks, I tried creating an AWSAuthConnection by passing keys, followed by AWSQueryConnection(awsauth), followed by SupportConnection(awsquery), with no luck. This was inside a script.
Last item of interest is that, with my keys defined in a .boto file in my home directory, and running python interpreter from the command line, I can make a direct import and call to SupportConnection() (no arguments) and it works. It clearly is picking up my keys from the .boto file and consuming them but I haven't analyzed every line of source code to understand how, and frankly, I'm hoping to avoid doing that.
Long story short, I'm hoping someone has some familiarity with boto and connecting to AWS APIs other than S3 (the bulk of material that exists via Google) to help me troubleshoot further.
This should work:
import boto.support
conn = boto.support.connect_to_region('us-east-1')
This assumes you have credentials in your boto config file or in an IAM Role. If you want to pass explicit credentials, do this:
import boto.support
conn = boto.support.connect_to_region('us-east-1', aws_access_key_id="<access key>", aws_secret_access_key="<secret key>")
This basic incantation should work for all services in all regions. Just import the correct module (e.g. boto.support or boto.ec2 or boto.s3 or whatever) and then call its connect_to_region method, supplying the name of the region you want as a parameter.
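Once connected, pulling back the open tickets the question mentions looks roughly like this. A sketch, assuming the connection object exposes describe_cases as in boto.support.layer1 and that the response follows the AWS Support DescribeCases shape; check the docs for your boto version:
import boto.support

conn = boto.support.connect_to_region('us-east-1')

# DescribeCases returns a dict with a 'cases' list
response = conn.describe_cases()
for case in response.get('cases', []):
    print(case['caseId'], case['status'])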

Accessing samba shares with gio in python

I am trying to make a simple command line client for accessing shares via the Python bindings of gio (yes, the main requirement is to use gio).
I can see that, compared with its predecessor gnome-vfs, it provides some means to do authentication stuff (subclassing MountOperation), and even some methods which are quite specific to samba shares, like set_domain().
But I'm stuck with this code:
import gio
fh = gio.File("smb://server_name/")
If that server needs authentication, I suppose that a call to fh.mount_enclosing_volume() is needed, as this method takes a MountOperation as a parameter. The problem is that calling this method does nothing, and the logical fh.enumerate_children() (to list the available shares) that comes next fails.
Could anybody provide a working example of how this would be done with gio?
The following appears to be the minimum code needed to mount a volume:
def mount(f):
    op = gio.MountOperation()
    op.connect('ask-password', ask_password_cb)
    f.mount_enclosing_volume(op, mount_done_cb)

def ask_password_cb(op, message, default_user, default_domain, flags):
    op.set_username(USERNAME)
    op.set_domain(DOMAIN)
    op.set_password(PASSWORD)
    op.reply(gio.MOUNT_OPERATION_HANDLED)

def mount_done_cb(obj, res):
    obj.mount_enclosing_volume_finish(res)
(Derived from gvfs-mount.)
In addition, you may need a glib.MainLoop running because GIO mount functions are asynchronous. See the gvfs-mount source code for details.
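Putting it together, a minimal sketch of driving the asynchronous mount to completion with a main loop (mount() and ask_password_cb are the helpers above; mount_done_cb is redefined here to stop the loop; USERNAME, DOMAIN and PASSWORD remain placeholders):
import gio
import glib

loop = glib.MainLoop()

def mount_done_cb(obj, res):
    obj.mount_enclosing_volume_finish(res)
    loop.quit()  # stop the main loop once the mount has finished

f = gio.File("smb://server_name/")
mount(f)    # kicks off the asynchronous mount
loop.run()  # callbacks fire while the main loop is running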
