Default values for boto3 method parameters - python

I would like to write a method that calls a boto3 method and changes some of its default parameters programmatically.
For example, I want to use my log bucket if log is set to True, and otherwise not log at all. Something like this:
def my_run(log=False):
    log_string = "s3://mylogs" if log else None
    result = emr.run_job_flow(Name='EMRTest1',
                              LogUri=log_string,
                              ...
                              )
    return result
So for the default value I used None. However, boto3 expects a string. I also tried an empty string as the default, and that's not a valid value either.
I know that if I don't specify LogUri at all, nothing gets stored. So I could do it with if statements, like this:
def my_run(log=False):
    if log:
        result = emr.run_job_flow(Name='EMRTest1',
                                  LogUri="s3://mylogs",
                                  ...
                                  )
    else:
        result = emr.run_job_flow(Name='EMRTest1',
                                  ...
                                  )
    return result
But that's a horrible way to do it. And LogUri was just an example; I want to be able to change other parameters as well, and I just can't write all those nested ifs.
Is there a default value for the various types (like strings) in boto3 that I can use?
Edit 1
From the first comment below:
http://boto3.readthedocs.org/en/latest/guide/events.html#provide-client-params
Interesting API, though not well documented. Their example for the S3 client works fine:
s3 = boto3.client('s3')

# Access the event system on the S3 client
event_system = s3.meta.events

# Create a function
def add_my_bucket(params, **kwargs):
    print "Hello"
    # Add the name of the bucket you want to default to.
    if 'Bucket' not in params:
        params['Bucket'] = 'mybucket'

# Register the function to an event
event_system.register('provide-client-params.s3.ListObjects', add_my_bucket)
response = s3.list_objects()
The response is good and I also see "Hello" printed.
But now I try to make an example for EMR's run_job_flow:
def my_run(name):
    def setName(params, **kwargs):
        print "Hello"
        params['Name'] = name

    current_emr = boto3.client('emr')
    event_system = current_emr.meta.events
    event_system.register('provide-client-params.emr.RunJobFlow', setName)
    current_emr.run_job_flow(...)
When I run this, I get:
Missing required parameter in input: "Name"
Am I using the wrong syntax somehow? I did use RunJobFlow, which I got from current_emr.meta.method_to_api_mapping. Maybe it's not provide-client-params for EMR? I also don't see "Hello" printed.

There are no default values except the service-specified ones, in which case we just send nothing at all. If you want to change parameters at run time you can hook into the events system. You can read up about that here. The examples there are pretty close to what I think you want to do.
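If the events system feels heavy for this, one plain-Python alternative is to build the keyword arguments in a dict and only include the optional ones you actually want to send. A minimal sketch, reusing the EMRTest1/LogUri values from the question (the elided required run_job_flow parameters still need to be filled in):

import boto3

emr = boto3.client('emr')

def my_run(log=False):
    # Required parameters go into the dict unconditionally.
    kwargs = {
        'Name': 'EMRTest1',
        # ... other required run_job_flow parameters ...
    }
    # Optional parameters are added only when they should actually be sent,
    # so boto3 never sees a None or empty-string placeholder.
    if log:
        kwargs['LogUri'] = 's3://mylogs'
    return emr.run_job_flow(**kwargs)

This scales to any number of optional parameters without nested ifs, since each one is just another conditional dict entry.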

Related

Check if sub-methods exist in one line (python)

I'm developing a Telegram bot in Python (3.7).
I'm getting POST updates/messages from users (class telegram.update.Update):
if request.method == "POST":
    update = Update.de_json(request.get_json(force=True), bot)
Then I store all the parameters of the incoming "message" in variables.
For example, if there is a user status update, I save it this way:
if update and update.my_chat_member and update.my_chat_member.new_chat_member \
        and update.my_chat_member.new_chat_member.status:
    new_status = update.my_chat_member.new_chat_member.status
else:
    new_status = None
It's important to me to save None if this parameter is not provided.
And I need to save about 20 parameters of this "message". Repeating this construction 20 times, once per parameter, seems inefficient.
I tried to create a checking function I could reuse, so each parameter could be checked in one line:
def check(f_input):
    try:
        f_output = f_input
    except:  # I understand it's too broad an exception, but as I learned it's fine for such cases
        f_output = None
    return f_output

new_status = check(update.my_chat_member.new_chat_member.status)
But if a certain Update doesn't have update.my_chat_member, the error pops up at the point of calling check(), not inside it:
new_status = check(update.my_chat_member.new_chat_member.status)
>>> 'NoneType' object has no attribute 'new_chat_member'
Could someone guide me, please: is there an intelligent way to save these 20 parameters (or None), when the set of parameters changes from message to message?
You're trying to access attributes on a None object before it is ever passed to your check function. The update.my_chat_member object itself is None and has no new_chat_member attribute. A way to check for that could look like this:
if not isinstance(update.my_chat_member, type(None)):
    ...  # do what you need with a valid response
else:
    ...  # do what you need with a None response
Also, as @2e0byo noted, the check function doesn't really do much. You're checking whether you can assign a variable to f_output, which outside of a few cases will always pass without an exception, so the except branch will almost never run and certainly not as intended. You need to try something that can raise the expected error. This is an example using your code:
def check(f_input):
    try:
        f_output = f_input.new_chat_member.status
    except AttributeError:
        f_output = None
    return f_output

new_status = check(update.my_chat_member)
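Since there are about 20 parameters to extract, a more generic variant of the same idea can walk a dotted attribute path and fall back to None as soon as any link is missing. A sketch (safe_get is just a name chosen here, not part of python-telegram-bot):

def safe_get(obj, path, default=None):
    # Follow a dotted attribute path, e.g. "my_chat_member.new_chat_member.status",
    # returning `default` as soon as any attribute along the way is missing or None.
    for attr in path.split('.'):
        obj = getattr(obj, attr, None)
        if obj is None:
            return default
    return obj

new_status = safe_get(update, 'my_chat_member.new_chat_member.status')

One such line per parameter replaces the repeated if/else blocks.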

How to print only once per execution in Python

I'm not a very experienced programmer, and I tried to find a library containing a Python function that would print something only the first time it is called from a given place and no more for the rest of the script execution. Because I didn't find one, I created my own, and it works. But I don't know how to make it work like print(f'') so I can include variables. Loops + break won't work here.
This is how it looks right now:
# dictionary so we don't have scope issues
already_printed = {}

def print_only_once(msg):
    try:
        testing = already_printed[msg]
    except KeyError:
        already_printed[msg] = msg
        print(msg)
    return
And I call it using:
print_only_once('my special message')
If this message has been printed before, it won't print again. I would also like to be able to do something like this:
print_only_once(f'temp is {var1} and humidity is {var2}')
And this special message would print only once per script execution, regardless of how many times it is called, even with different values of var1 and/or var2 from the first call.
Any help? Maybe there's something out there that does exactly this, but I couldn't find it. Thanks a lot!
You can't do this when you format the string in the caller, because your function only receives the formatted string.
You need to have your function take the format string and the values separately. Then you can record the format string (a set is enough here) and call .format() explicitly.
already_printed = set()

def print_only_once(format_string, *args, **kwargs):
    if format_string not in already_printed:
        print(format_string.format(*args, **kwargs))
        already_printed.add(format_string)

print_only_once('temp is {var1} and humidity is {var2}', var1=var1, var2=var2)
Try doing something like this:
already_printed = []

def print_only_once(msg_name, msg):
    if msg_name not in already_printed:
        already_printed.append(msg_name)
        print(msg)
    return
I added another parameter called msg_name. You give it a short label describing what the message is about (like a group name), and you put your actual text/message in msg.
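With that version the label stays constant even when the formatted text changes, so the message prints only once. A small usage sketch (the label 'weather' and the values are just examples):

var1, var2 = 21, 60
print_only_once('weather', f'temp is {var1} and humidity is {var2}')  # prints
var1, var2 = 22, 65
print_only_once('weather', f'temp is {var1} and humidity is {var2}')  # skipped: 'weather' already printed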

Python: Using string stored in variable to create attribute to class

I am trying to follow the syntax of the pyparticleio.ParticleCloud package. The following command works correctly for me with hardcoded values: particle_cloud.boron1.led('on')
I want to pass portions of the command, "boron1" and "on", as variables. I'm trying to figure out how to use those variables so they act the same way as the hardcoded values.
My Python level is very beginner.
command_list = ['boron1', 'on']
device = command_list[0]
function_1 = command_list[1]
access_token = "ak3bidl3xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
particle_cloud = ParticleCloud(username_or_access_token=access_token)

# particle_cloud.boron1.led('on')  # hardcoded example that works
particle_cloud.device.led(function_1)  # what I would like to work
If you set device to the actual object, you can call methods on the object. Example:
device = particle_cloud.boron1 # Or whatever you like
arg = 'on' # Also make this whatever you like
device.led(arg) # Same as: particle_cloud.boron1.led('on')
Python has a built-in function called exec. It allows you to take a string and have Python execute it as code.
A basic example based on the code you provided would look like this:
command_list = ['boron1', 'on']
device = command_list[0]
function_1 = command_list[1]
exec('particle_cloud.' + device + '.led("' + function_1 + '")')
This is a bit ugly, but there are different ways to compose strings in Python, such as join or format, so depending on your real code you may be able to build something nicer.
Just be careful not to pass raw user input to exec! It can cause all kinds of trouble, from errors to security issues.
I believe you could use getattr() (in Python 3: https://docs.python.org/3/library/functions.html#getattr):
pcdevice = getattr(particle_cloud, device)
pcdevice.led(function_1)
(BTW, I wouldn't name the string 'on' with the label 'function_1', as the variable name implies that this option is a function when it is a string. Also, the above may not work depending on the properties of your library object ParticleCloud.)
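If the method name also needs to come from a string, getattr can be chained. A sketch under the same assumptions as the answer above ('led' is hardcoded only to mirror the original example):

device_obj = getattr(particle_cloud, device)   # same object as particle_cloud.boron1
led_method = getattr(device_obj, 'led')        # look up the bound method by name
led_method(function_1)                         # equivalent to particle_cloud.boron1.led('on')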

How to pass a Troposphere object attribute conditionally?

I'm using Troposphere to build CloudFormation stacks and would like to pass the Elastic Load Balancer ConnectionSettings attribute only if it's set in my configuration; otherwise I don't want to specify it at all.
If I set it to a default of None, I get an error about the value not being of the expected type troposphere.elasticloadbalancing.ConnectionSettings.
I'd rather avoid setting an explicit default in the call because it might override other settings.
Ideally, I would like to be able to add attributes to an existing object, e.g.:
lb = template.add_resource(elb.LoadBalancer(
    ...
))
if condition:
    lb.add_attribute(ConnectionSettings=elb.ConnectionSettings(
        ...
    ))
Is there a way to achieve that?
UPDATE: I achieved it using a hidden Troposphere method, which works but I'm not happy with:
if condition:
    lb.__setattr__('ConnectionSettings', elb.ConnectionSettings(
        ....
    ))
I'm still interested in a solution which doesn't involve using a private method from outside the module.
The main README alludes to just using the attribute names, like this:
from troposphere import Template
import troposphere.elasticloadbalancing as elb

template = Template()

webelb = elb.LoadBalancer(
    'ElasticLoadBalancer',
    Listeners=[
        elb.Listener(
            LoadBalancerPort="80",
            InstancePort="80",
            Protocol="HTTP",
        ),
    ],
)

if True:
    webelb.ConnectionSettings = elb.ConnectionSettings(IdleTimeout=30)

elasticLB = template.add_resource(webelb)
print(template.to_json())
So the big question is: where does the configuration for ConnectionSettings come from? Inside CloudFormation itself (and Troposphere) there are Conditions, Parameters, and the AWS::NoValue Ref. I use that fairly heavily in the stacker RDS templates:
Here's the Parameter: https://github.com/remind101/stacker/blob/master/stacker/blueprints/rds/base.py#L126
Here's the Condition: https://github.com/remind101/stacker/blob/master/stacker/blueprints/rds/base.py#L243
And here's how it's used in a resource later, optionally: if the StorageType Parameter is blank, we use AWS::NoValue, which is a pseudo Ref for not actually setting something. (Sorry, can't post more than 2 links; go to line 304 in the same file to see what I'm talking about.)
If you're not using Parameters, and are instead doing all your conditions in Python, you could do something similar. Something like:
connection_settings = elb.ConnectionSettings(...) if condition else Ref("AWS::NoValue")
The other option is to do it entirely in Python, which is basically your example. Hopefully that helps; there are a lot of ways to deal with this, including creating two different ELB objects (one with connection settings, one without) and then picking one, either with Python code (if condition) or with CloudFormation Conditions.
If the value is known in Python (i.e., it does not originate from a CloudFormation Parameter), you can use a dictionary to add optional attributes to resources in a Troposphere template:
from troposphere import Template
import troposphere.elasticloadbalancing as elb

template = Template()

my_idle_timeout = 30  # replace this with your method for determining the value

my_elb_params = {}
if my_idle_timeout is not None:
    my_elb_params['ConnectionSettings'] = elb.ConnectionSettings(
        IdleTimeout=my_idle_timeout,
    )

my_elb = template.add_resource(elb.LoadBalancer(
    'ElasticLoadBalancer',
    Listeners=[
        elb.Listener(
            LoadBalancerPort="80",
            InstancePort="80",
            Protocol="HTTP",
        ),
    ],
    **my_elb_params
))

print(template.to_json())
If the value originates from a CloudFormation Parameter, you need to create a Condition to test the value of the parameter and use Ref("AWS::NoValue") if no value was provided for the parameter, e.g.:
from troposphere import Template, Parameter, Equals, Ref, If
import troposphere.elasticloadbalancing as elb

template = Template()

my_idle_timeout = template.add_parameter(Parameter(
    "ElbIdleTimeout",
    Description="Idle timeout for the Elastic Load Balancer",
    Type="Number",
))

no_idle_timeout = "NoIdleTimeout"
template.add_condition(
    no_idle_timeout,
    Equals(Ref(my_idle_timeout), ""),
)

my_elb = template.add_resource(elb.LoadBalancer(
    'ElasticLoadBalancer',
    Listeners=[
        elb.Listener(
            LoadBalancerPort="80",
            InstancePort="80",
            Protocol="HTTP",
        ),
    ],
    ConnectionSettings=If(
        no_idle_timeout,
        Ref("AWS::NoValue"),
        elb.ConnectionSettings(
            IdleTimeout=Ref(my_idle_timeout),
        ),
    ),
))

print(template.to_json())

How do I use beaker caching in Pyramid?

I have the following in my ini file:
cache.regions = default_term, second, short_term, long_term
cache.type = memory
cache.second.expire = 1
cache.short_term.expire = 60
cache.default_term.expire = 300
cache.long_term.expire = 3600
And this in my __init__.py:
from pyramid_beaker import set_cache_regions_from_settings
set_cache_regions_from_settings(settings)
However, I'm not sure how to perform the actual caching in my views/handlers. Is there a decorator available? I figured there would be something in the response API, but only cache_control is available, which instructs the user to cache the data rather than caching it server-side.
Any ideas?
My mistake was to call the decorator function @cache_region on a view callable. I got no error reports, but there was no actual caching. So in my views.py I was trying something like:
@cache_region('long_term')
def photos_view(request):
    # just an example of a costly call to Google Picasa
    gd_client = gdata.photos.service.PhotosService()
    photos = gd_client.GetFeed('...')
    return {
        'photos': photos.entry
    }
No errors and no caching. Also, your view callable will start to require another parameter! But this works:
# make a separate function and cache it
@cache_region('long_term')
def get_photos():
    gd_client = gdata.photos.service.PhotosService()
    photos = gd_client.GetFeed('...')
    return photos.entry
And then in the view callable just:
def photos_view(request):
    return {
        'photos': get_photos()
    }
The same way it works for @cache.cache, etc.
Summary: do not try to cache view callables.
PS. I still have a slight suspicion that view callables can be cached :)
UPD.: As hlv explains below, when you cache a view callable the cache is actually never hit, because @cache_region uses the callable's request param as part of the cache id, and request is unique for every request.
BTW, the reason it didn't work for you when calling view_callable(request) is that the function parameters get pickled into a cache key for later lookup in the cache.
Since self and request change for every request, the return values ARE indeed cached, but they can never be looked up again; instead your cache gets bloated with lots of useless keys.
I cache parts of my view functions by defining a new function inside the view callable, like:
def view_callable(self, context, request):
    @cache_region('long_term', 'some-unique-key-for-this-call_%s' % (request.params['some-specific-id']))
    def func_to_cache():
        # do something expensive with request.db, for example
        return something
    return func_to_cache()
It SEEMS to work nicely so far.
Cheers
You should use a cache region:
from beaker.cache import cache_region

@cache_region('default_term')
def your_func():
    ...
A hint for those using @cache_region on functions but not getting their results cached: make sure the parameters of the function are scalar.
Example A (doesn't cache):
@cache_region('hour')
def get_addresses(person):
    return Session.query(Address).filter(Address.person_id == person.id).all()

get_addresses(Session.query(Person).first())
Example B (does cache):
@cache_region('hour')
def get_addresses(person):
    return Session.query(Address).filter(Address.person_id == person).all()

get_addresses(Session.query(Person).first().id)
The reason is that the function parameters are used as the cache key - something like get_addresses_123. If an object is passed this key can't be made.
Same problem here. You can perform caching using default parameters with
from beaker.cache import CacheManager
and then decorators like
@cache.cache('get_my_profile', expire=60)
like in http://beaker.groovie.org/caching.html, but I can't find a solution for making it work with the Pyramid .ini configuration.
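For reference, here is a minimal sketch of the plain-Beaker setup that page describes, wiring the options by hand rather than through Pyramid's .ini handling (the option values just mirror the .ini at the top of the question, and get_my_profile is only an illustration):

from beaker.cache import CacheManager
from beaker.util import parse_cache_config_options

# The same kind of options as in the .ini file, passed as a plain dict.
cache_opts = {
    'cache.type': 'memory',
    'cache.regions': 'default_term, second, short_term, long_term',
    'cache.default_term.expire': 300,
    'cache.long_term.expire': 3600,
}
cache = CacheManager(**parse_cache_config_options(cache_opts))

@cache.cache('get_my_profile', expire=60)
def get_my_profile(user_id):
    ...  # expensive lookup goes here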
