I've used boto to interact with S3 with no problems, but now I'm attempting to connect to the AWS Support API to pull back info on open tickets, Trusted Advisor results, etc. It seems that the boto library has a different connect method for each AWS service. For example, with S3 it is:
conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
According to the boto docs, the following should work to connect to AWS Support API:
>>> from boto.support.connection import SupportConnection
>>> conn = SupportConnection('<aws access key>', '<aws secret key>')
However, there are a few problems I see after digging through the source code. First, boto.support.connection doesn't actually exist. boto.connection does, but it doesn't contain a class SupportConnection. boto.support.layer1 exists, and DOES have the class SupportConnection, but it doesn't accept key arguments as the docs suggest. Instead it takes one argument: an AWSQueryConnection object. That class is defined in boto.connection. AWSQueryConnection in turn takes one argument: an AWSAuthConnection object, a class also defined in boto.connection. Lastly, AWSAuthConnection takes a generic object, with requirements defined in its __init__ as:
class AWSAuthConnection(object):
    def __init__(self, host, aws_access_key_id=None,
                 aws_secret_access_key=None,
                 is_secure=True, port=None, proxy=None, proxy_port=None,
                 proxy_user=None, proxy_pass=None, debug=0,
                 https_connection_factory=None, path='/',
                 provider='aws', security_token=None,
                 suppress_consec_slashes=True,
                 validate_certs=True, profile_name=None):
So, for kicks, I tried creating an AWSAuthConnection by passing keys, followed by AWSQueryConnection(awsauth), followed by SupportConnection(awsquery), with no luck. This was inside a script.
Last item of interest is that, with my keys defined in a .boto file in my home directory and running the Python interpreter from the command line, I can do a direct import and call SupportConnection() (no arguments) and it works. It is clearly picking up my keys from the .boto file and consuming them, but I haven't analyzed every line of source code to understand how, and frankly, I'm hoping to avoid doing that.
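For reference, the part of ~/.boto that boto reads credentials from looks like this (placeholders in place of real keys):

[Credentials]
aws_access_key_id = <your access key>
aws_secret_access_key = <your secret key>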
Long story short, I'm hoping someone has some familiarity with boto and connecting to AWS APIs other than S3 (which makes up the bulk of the material findable via Google) to help me troubleshoot further.
This should work:
import boto.support
conn = boto.support.connect_to_region('us-east-1')
This assumes you have credentials in your boto config file or in an IAM Role. If you want to pass explicit credentials, do this:
import boto.support
conn = boto.support.connect_to_region('us-east-1', aws_access_key_id="<access key>", aws_secret_access_key="<secret key>")
This basic incantation should work for all services in all regions. Just import the correct module (e.g. boto.support, boto.ec2, boto.s3, or whatever) and then call its connect_to_region method, supplying the name of the region you want as a parameter.
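For instance, the same pattern with EC2 looks like this (a minimal sketch; the region name is just an example):

import boto.ec2

# Same incantation, different service module.
conn = boto.ec2.connect_to_region('us-west-2')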
I have end-to-end tests written in pytest, running on a Kubernetes cluster in Namespace foo. Now I want to add simple chaos engineering to the tests to check the resilience of my services. For this, I only need to delete specific pods within foo -- since K8s restarts the corresponding service, this simulates the service being temporarily unreachable.
What is a simple way to delete a specific pod in the current namespace with Python?
What I have tried so far:
Since I did not find a suitable example in https://github.com/kubernetes-client/python/tree/master/examples (the ones using pods looked quite complex), I looked into kubetest, which seems very simple and elegant.
I wanted to use the kube fixture and then do something like this:
pods = kube.get_pods()
for pod in pods:
    if can_be_temporarily_unreachable(pod):
        kube.delete(pod)
I thought calling pytest with parameter --in-cluster would tell kubetest to use the current cluster setup and not create new K8s resources.
However, kubetest wants to create a new namespace for each test case that uses the kube fixture, which I do not want.
Is there a way to tell kubetest not to create new namespaces but do everything in the current namespace?
Though kubetest looks very simple and elegant otherwise, I am happy to use another solution, too.
A simple solution that requires little time and maintenance and does not complicate (reading) the tests would be awesome.
You can use delete_namespaced_pod (from CoreV1Api) to delete specific pods within a namespace.
Here is an example:
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_incluster_config()  # or config.load_kube_config() when running outside the cluster
configuration = client.Configuration()

with client.ApiClient(configuration) as api_client:
    api_instance = client.CoreV1Api(api_client)
    namespace = 'kube-system'  # str | see Max Lobur's answer on how to get this
    name = 'kindnet-mpkvf'     # str | Pod name, e.g. via api_instance.list_namespaced_pod(namespace)

    try:
        api_response = api_instance.delete_namespaced_pod(name, namespace)
        print(api_response)
    except ApiException as e:
        print("Exception when calling CoreV1Api->delete_namespaced_pod: %s\n" % e)
A continuation of the existing answer by Tibebes. M:
How do I find out the namespace that the code itself is running in?
There are two ways to do this:
Use the Downward API to pass the pod's namespace to an environment variable:
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api See metadata.namespace
Use your template engine / helm to additionally pass the namespace to the pod as an environment variable. Example:
env:
  - name: CURRENT_NAMESPACE
    value: "{{ .Values.namespace }}"
I'm building a web application with the Flask framework in Python. On the server I would like to preserve some state. I think the following code example makes my goal clear (and was also my initial idea):
name = ""
#app.route('/<input_name>')
def home(input_name):
global name
name = input_name
return "name set"
#app.route('/getname')
def getname():
global name
return name
However, when I deployed my website, the response to a /getname request behaved inconsistently because there are multiple thread/process instances of the code (I could be wrong). I have some plausible solutions to overcome this problem, but I wonder if there is a cleaner one:
Solution 1: read and write the name from a database (a database seems like overkill if I only want to store one variable)
Solution 2: store the value of name in a file and set up a locking mechanism so that only one process/thread can write to the file at the same moment
Goal: when client 'A' requests www.website.com/sven and after that client 'B' requests www.website.com/getname, I want the response for client B to be 'sven'.
Any suggestions?
Your example should not be done with global state; it will not work for the reason you mentioned: requests might land in different processes that have different global values.
You can store global values using a key-value cache such as memcached or Redis, or a file-system based cache. Check the Flask-Caching package, in particular the docs on built-in cache backends: https://flask-caching.readthedocs.io/en/latest/#built-in-cache-backends
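A minimal sketch of that approach, assuming Flask-Caching >= 1.10 (older versions spell the backend 'filesystem'); a filesystem backend shares the value across workers on one host, while Redis also works across hosts:

from flask import Flask
from flask_caching import Cache

app = Flask(__name__)
# FileSystemCache persists entries on disk, so all worker processes see the same value.
cache = Cache(app, config={'CACHE_TYPE': 'FileSystemCache', 'CACHE_DIR': '/tmp/flask-cache'})

@app.route('/<input_name>')
def home(input_name):
    cache.set('name', input_name)
    return "name set"

@app.route('/getname')
def getname():
    return cache.get('name') or ""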
I have a Python application which should run remotely via an AWS pipeline and use secrets to get parameters such as database credentials. When running the application locally, the parameters are loaded from a parameters.json file. My problem is how to test whether I am running remotely (i.e., what to replace IN_CLOUD_TEST with):
import boto3
from json import load

if [IN_CLOUD_TEST]:
    params_raw = boto3.client('ssm').get_parameters_by_path(Path='/', Recursive=True)['Parameters']
    params = format_params(params_raw)
else:
    with open('parameters.txt') as json_file:
        params = load(json_file)
I could of course use a try/except, but there must be something nicer.
You could check using AWS APIs, but a simpler alternative (and one that doesn't require making HTTP calls, helping you shave off some latency) is to set an environment variable on your remote server that tells it it's the production server and read it from the code.
import boto3
from json import load
from os import getenv

if getenv('IS_REMOTE', False):
    params_raw = boto3.client('ssm').get_parameters_by_path(Path='/', Recursive=True)['Parameters']
    params = format_params(params_raw)
else:
    with open('parameters.txt') as json_file:
        params = load(json_file)
You could also apply the same logic inverted: define a variable that is true when your server is supposed to be the testing one, and set it only on your local testing machine.
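A sketch of that inverted check (IS_LOCAL_TEST is a hypothetical variable name you would set only on the testing machine):

# IS_LOCAL_TEST is set only on the local testing machine; everywhere else
# the code falls through to the SSM branch.
if getenv('IS_LOCAL_TEST', False):
    with open('parameters.txt') as json_file:
        params = load(json_file)
else:
    params_raw = boto3.client('ssm').get_parameters_by_path(Path='/', Recursive=True)['Parameters']
    params = format_params(params_raw)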
I have a simple SQL file that I'd like to read and execute using a Python Script Snap in SnapLogic. I created an expression library file to reference the Redshift account and have included it as a parameter in the pipeline.
I have the code below from another post. Is there a way to reference the pipeline parameter to connect to the Redshift database, read the uploaded SQL file and execute the commands?
fd = open('shared/PythonExecuteTest.sql', 'r')
sqlFile = fd.read()
fd.close()

sqlCommands = sqlFile.split(';')
for command in sqlCommands:
    try:
        c.execute(command)
    except OperationalError, msg:
        print "Command skipped: ", msg
You can access pipeline parameters in scripts using $_.
Let's say, you have a pipeline parameter executionId. Then to access it in the script you can do $_executionId.
Following is the script from a test pipeline that uses an executionId pipeline parameter (the original screenshots of the pipeline, the parameter, and the test data are not reproduced here):
# Import the interface required by the Script snap.
from com.snaplogic.scripting.language import ScriptHook
import java.util

class TransformScript(ScriptHook):
    def __init__(self, input, output, error, log):
        self.input = input
        self.output = output
        self.error = error
        self.log = log

    # The "execute()" method is called once when the pipeline is started
    # and allowed to process its inputs or just send data to its outputs.
    def execute(self):
        self.log.info("Executing Transform script")
        while self.input.hasNext():
            try:
                # Read the next document, wrap it in a map and write out the wrapper
                in_doc = self.input.next()
                wrapper = java.util.HashMap()
                wrapper['output'] = in_doc
                wrapper['output']['executionId'] = $_executionId
                self.output.write(in_doc, wrapper)
            except Exception as e:
                errWrapper = {
                    'errMsg' : str(e.args)
                }
                self.log.error("Error in python script")
                self.error.write(errWrapper)
        self.log.info("Finished executing the Transform script")

# The Script Snap will look for a ScriptHook object in the "hook"
# variable. The snap will then call the hook's "execute" method.
hook = TransformScript(input, output, error, log)
Output: each output document contains the executionId read from the pipeline parameters (output screenshot not reproduced here).
Note: Accessing pipeline parameters from scripts is a valid scenario, but accessing other external systems from a script is complicated (because you would need to load the required libraries) and not recommended. Use the snaps provided by SnapLogic to access external systems. Also, if you want to use other libraries inside scripts, try sticking to JavaScript instead of Python, because there are a lot of open-source CDNs that you can use in your scripts.
Also, you can't access any configured expression library directly from the script. If you need some logic in the script, keep it in the script and not somewhere else. And there is no point in accessing account names in the script (or in mappers) because, even if you know the account name, you can't directly use the credentials/configurations stored in that account; that is handled by SnapLogic. Use the provided snaps and mappers as much as possible.
Update #1
You can't access the account directly. Accounts are managed and used internally by the snaps. You can only create and set accounts through the accounts tab of the relevant snap.
Avoid using the Script snap as much as possible, especially if you can do the same thing using normal snaps.
Update #2
The simplest solution to this requirement would be as follows.
Read the file using a file reader
Split based on ;
Execute each SQL command using the Generic JDBC Execute Snap
I am trying to make a simple command line client for accessing shares via the Python bindings of gio (yes, the main requirement is to use gio).
I can see that, compared with its predecessor gnome-vfs, it provides some means to do authentication stuff (subclassing MountOperation), and even some methods which are quite specific to Samba shares, like set_domain().
But I'm stuck with this code:
import gio
fh = gio.File("smb://server_name/")
If that server needs authentication, I suppose that a call to fh.mount_enclosing_volume() is needed, as this method takes a MountOperation as a parameter. The problem is that calling this method does nothing, and the logical fh.enumerate_children() (to list the available shares) that comes next fails.
Could anybody provide a working example of how this would be done with gio?
The following appears to be the minimum code needed to mount a volume:
def mount(f):
    op = gio.MountOperation()
    op.connect('ask-password', ask_password_cb)
    f.mount_enclosing_volume(op, mount_done_cb)

def ask_password_cb(op, message, default_user, default_domain, flags):
    op.set_username(USERNAME)
    op.set_domain(DOMAIN)
    op.set_password(PASSWORD)
    op.reply(gio.MOUNT_OPERATION_HANDLED)

def mount_done_cb(obj, res):
    obj.mount_enclosing_volume_finish(res)
(Derived from gvfs-mount.)
In addition, you may need a glib.MainLoop running because GIO mount functions are asynchronous. See the gvfs-mount source code for details.
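A minimal sketch of putting it together, assuming the old static gio/glib bindings from the question and redefining mount_done_cb to quit the loop (USERNAME, DOMAIN, PASSWORD are placeholders you supply):

import gio
import glib

loop = glib.MainLoop()

def mount_done_cb(obj, res):
    try:
        obj.mount_enclosing_volume_finish(res)
    finally:
        loop.quit()  # stop the loop once the mount attempt completes

f = gio.File("smb://server_name/")
mount(f)    # uses ask_password_cb and the mount_done_cb above
loop.run()  # the asynchronous mount proceeds while the loop runs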