I have a Lambda function like this:
my_executor = python_lambda.PythonFunction(
    scope=self,
    id="my-lambda-new",
    runtime=_lambda.Runtime.PYTHON_3_8,
    role=existing_role_for_lambda,
    memory_size=512,
    function_name="my-lambda-new",
    description="This ",
    entry="./logs/src/myfolder",
    index='controller.py',
    handler="lambda_handler",
    timeout=core.Duration.minutes(5)
)
I have created an API Gateway like this:
my_api = _apigateway.LambdaRestApi(
    scope=self,
    id="my-api",
    endpoint_configuration=_apigateway.EndpointConfiguration(
        types=[_apigateway.EndpointType.EDGE]
    ),
    handler=my_executor,
    default_cors_preflight_options=shared_stack.cors_options,
    deploy_options=api_stage,
    proxy=True
)
Now, in another file, I am importing the Lambda by its ARN like this:
_lambda_arn = ssm.StringParameter.value_for_string_parameter(self, "my-executor-lambda-arn")
self.my_executor_lambda = _lambda.Function.from_function_arn(
    self, "my_executor", _lambda_arn
)
Now, how can I extend its API Gateway? I want to add a new API endpoint here.
From the CDK docs:
Import an existing RestApi that can be configured with additional Methods and Resources.
Take a look at fromRestApiAttributes
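A rough sketch of that approach, assuming the REST API id and root resource id were also exported somewhere (the SSM parameter names below are made up) and reusing the Lambda imported above:
import aws_cdk.aws_apigateway as _apigateway

# Hypothetical parameter names; replace with whatever your first stack exports.
rest_api_id = ssm.StringParameter.value_for_string_parameter(self, "my-api-id")
root_resource_id = ssm.StringParameter.value_for_string_parameter(self, "my-api-root-resource-id")

imported_api = _apigateway.RestApi.from_rest_api_attributes(
    self,
    "imported-api",
    rest_api_id=rest_api_id,
    root_resource_id=root_resource_id,
)

# Add a new endpoint backed by the imported Lambda.
new_resource = imported_api.root.add_resource("new-endpoint")
new_resource.add_method("GET", _apigateway.LambdaIntegration(self.my_executor_lambda))
Note that CDK does not automatically create a new deployment for an imported RestApi, so you may need to add a Deployment/Stage yourself before the new method becomes live, and depending on how the function was imported you may also have to grant API Gateway permission to invoke it.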
I'm using Firebase Firestore in my Python project (with their official Python SDK) and having trouble performing a count() aggregation. This function is supported according to their docs; however, they don't provide a Python example (they do in other parts of the documentation). I tried to play with it in the Python console, with something like this:
query = db.collection('videos').where('status', '==', 'pending')
query.count()
without any luck. So I'm wondering how it is possible to implement this. Does the Python SDK support this functionality?
The Firebase Admin Python SDK doesn't support that query yet. You can still use the runAggregationQuery REST API in the meantime. The Google Cloud Firestore Python SDK has aggregation result types available from v2.7.0+, so it should be available in the Admin SDK soon.
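If you can use the google-cloud-firestore client directly (firebase-admin wraps it), newer versions (roughly 2.7.0+) expose a count() aggregation on queries. A minimal sketch, assuming the videos collection and status field from the question (the exact result shape may vary slightly between versions):
from google.cloud import firestore

db = firestore.Client()

# Same filtered query as in the question, counted server-side.
query = db.collection("videos").where("status", "==", "pending")
aggregate_query = query.count(alias="pending_count")

# get() runs the aggregation; each result row is a list of AggregationResult objects.
for result in aggregate_query.get():
    print(result[0].alias, result[0].value)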
Although the API for this purpose is not fully ready yet, it's coming along. If you don't want to use the REST API as suggested by @Dharmaraj, you can do something like this for now:
from google.cloud.firestore_v1.services.firestore import FirestoreClient
from google.cloud.firestore_v1.types.document import Value
from google.cloud.firestore_v1.types.firestore import RunAggregationQueryRequest
from google.cloud.firestore_v1.types.query import (
StructuredAggregationQuery,
StructuredQuery,
)
Aggregation = StructuredAggregationQuery.Aggregation
CollectionSelector = StructuredQuery.CollectionSelector
Count = Aggregation.Count
FieldFilter = StructuredQuery.FieldFilter
FieldReference = StructuredQuery.FieldReference
Filter = StructuredQuery.Filter
Operator = StructuredQuery.FieldFilter.Operator
client = FirestoreClient()
project_id = ""
request = RunAggregationQueryRequest(
    parent=f"projects/{project_id}/databases/(default)/documents",
    structured_aggregation_query=StructuredAggregationQuery(
        structured_query=StructuredQuery(
            from_=[CollectionSelector(collection_id="videos")],
            where=Filter(
                field_filter=FieldFilter(
                    field=FieldReference(
                        field_path="status",
                    ),
                    op=Operator.EQUAL,
                    value=Value(string_value="pending"),
                )
            ),
        ),
        aggregations=[Aggregation(count=Count())],
    ),
)
stream = client.run_aggregation_query(request=request)
print(next(stream).result.aggregate_fields["field_1"].integer_value)
Output:
1
Generally the following would work to count the total number of documents in a collection:
def count_documents(collection_id: str) -> int:
    client = FirestoreClient()
    project_id = ""
    request = RunAggregationQueryRequest(
        parent=f"projects/{project_id}/databases/(default)/documents",
        structured_aggregation_query=StructuredAggregationQuery(
            structured_query=StructuredQuery(
                from_=[CollectionSelector(collection_id=collection_id)]
            ),
            aggregations=[Aggregation(count=Count())],
        ),
    )
    stream = client.run_aggregation_query(request=request)
    return next(stream).result.aggregate_fields["field_1"].integer_value

print(count_documents(collection_id="videos"))
Output:
10
Make sure that you have google-cloud-firestore>=2.7.3, and remember to set the value of the project_id variable in the count_documents function accordingly.
So, I can list my instances by zone using this API:
GET https://compute.googleapis.com/compute/v1/projects/{project}/zones/{zone}/instances
Now I want to filter my instances by region. Any idea how I can do this (using Python)?
You can use aggregated_list() to list all the instances in your project. Filtering by region can then be done in your own code. See the code below, where I used a regex on the zone name to mimic a filter using the region variable.
from typing import Dict, Iterable
from google.cloud import compute_v1
import re

def list_all_instances(
    project_id: str,
    region: str
) -> Dict[str, Iterable[compute_v1.Instance]]:
    instance_client = compute_v1.InstancesClient()
    request = {
        "project": project_id,
    }
    agg_list = instance_client.aggregated_list(request=request)
    all_instances = {}
    print("Instances found:")
    for zone, response in agg_list:
        if response.instances:
            if re.search(f"{region}*", zone):
                all_instances[zone] = response.instances
                print(f" {zone}:")
                for instance in response.instances:
                    print(f" - {instance.name} ({instance.machine_type})")
    return all_instances

list_all_instances(project_id="your-project-id", region="us-central1")  # used us-central1 for testing
NOTE: The code above is based on this sample code. I just modified it to apply the filtering described above.
Actual instances on my GCP account:
Result from the code above (only zones with the prefix us-central1 are displayed):
The resources on AWS were created manually in the console, e.g. Rule, EventBus, APIDestination (Target), which means there is no CDK code for these resources.
The point is that I want to add more Rules using the existing EventBus and APIDestination (Target), and then customize the input_transformer of the targets within CDK code.
from aws_cdk import aws_events as events, aws_events_targets as targets

class TheDestinedLambdaStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        new_rule = events.Rule(
            self,
            "rule",
            event_pattern=events.EventPattern(),
            event_bus=events.from_event_bus_arn(),  # imported
            targets=  # APIDestination with params and transformer, don't know the method ???
        )
Is it possible to implement this?
Or does anyone know which method of EventTarget is able to import an existing resource into CDK?
Docs:
https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.aws_events/EventBus.html
The L1 CfnRule construct can create a new Rule targeting an existing API Destination and custom bus. It can also optionally apply input transforms:
import json  # needed for the input_template below

events.CfnRule(
    self,
    "Rule",
    event_bus_name="my-bus-name",
    event_pattern={"source": ["cdk-test"]},
    targets=[
        events.CfnRule.TargetProperty(
            arn="arn:aws:events:us-east-1:xxxx:api-destination/xxxxxxxx",
            id="foo_rule",
            role_arn="arn:aws:iam::xxxxx:role/xxxxxxxxx",
            input_transformer=events.CfnRule.InputTransformerProperty(
                input_paths_map={"detail-type": "$.detail-type"},
                input_template=json.dumps(
                    {
                        "transformed": '{"name": "DETAIL_TYPE", "value": <detail-type>}'
                    }
                ),
            ),
        )
    ],
)
I'm trying to create several URLs on my server using a loop. The issue is that each function I create inside app.route can't have the same name as the others, and I don't know how to create different function names...
Here is the code:
json_tweets = []
for line in open(r'C:\Users\Benjamin\Desktop\DashboardProject\last_rated_set.json', "r"):
    json_tweets.append(json.loads(line, "ISO-8859-1"))

cashtag_tab = []
for tweet in json_tweets:
    if not (tweet['cashtag'] in cashtag_tab):
        cashtag_tab.append(tweet['cashtag'])

for i in range(0, (len(cashtag_tab) - 1)):
    var = cashtag_tab[i]

    @app.route("/" + var)
    def company(var):
        finance = Share(var)
        datas = finance.get_historical('2014-01-01', '2014-12-31')
        datas = json.dumps(datas, default=json_util.default)
        return datas
I'm getting the error: AssertionError: View function mapping is overwriting an existing endpoint function: company
This fails because Flask derives the endpoint name from the function name by default, but it would fail later anyway because the function company requires a var argument and the route is not parameterised. The simplest option would be to just check the value inside the handler:
@api.route('/<var>')
def company(var):
    if var not in cashtag_tab:
        abort(404)
If you want all the routes to be in the routing map for any reason, I once needed a similar thing and came up with something like this:
def url_family(source, methods=('GET',)):
    def decorator(f):
        for entry in source:
            # create a handler that delegates to your function
            def view_func(entry=entry, **kwargs):
                return f(entry, **kwargs)
            endpoint = '{0}_{1}'.format(f.__name__, entry)
            url = '/{0}'.format(entry)
            api.add_url_rule(url,
                             methods=methods,
                             endpoint=endpoint,
                             view_func=view_func)
        return f  # keep the original function available under its own name
    return decorator
Then you register the handlers as:
@url_family(cashtag_tab)
def company(var):
    ...
Assuming that you are using Flask, you should consider a custom URL converter. Check the links below (and the rough sketch after them):
http://flask.pocoo.org/docs/0.10/api/#flask.Flask.url_map - url_map UrlConverter API
https://exploreflask.com/views.html#url-converters - example url converter
https://stackoverflow.com/a/5872904/3451543 - RegexConverter by Philip Southam
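For illustration, a rough sketch of what a custom converter could look like for this case (cashtag_tab and the company view are from the question; the converter name and the placeholder list are made up):
import re
from flask import Flask
from werkzeug.routing import BaseConverter

app = Flask(__name__)
cashtag_tab = ['AAPL', 'GOOG']  # placeholder; built from the tweets as in the question

class CashtagConverter(BaseConverter):
    # Only match URL segments that are one of the known cashtags.
    def __init__(self, url_map):
        super(CashtagConverter, self).__init__(url_map)
        self.regex = '(?:{0})'.format('|'.join(re.escape(t) for t in cashtag_tab))

app.url_map.converters['cashtag'] = CashtagConverter

@app.route('/<cashtag:var>')
def company(var):
    ...  # look up and return the data for this cashtag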
Anyway, specifying more details in your question is always helpful to get an accurate answer :)
I want to see all the routes that my application has and return them as a response, as key => value pairs:
'route1' => '{foo:\w+}'
'route2' => '{baz:\w+\d+}'
... and so on
But I don't know how to get them within my view. For example, this is my view, and I want it to return a map of routes. I do this:
@view_config(route_name='route1')
def someView(request):
    routes = request.registry.settings.getRoutes()  # what should I print here to get a map of routes?
    r = ''
    for k, v in sorted(routes.items()):
        r += str(k) + "=>" + str(v) + "<br/>"
    return Response(r)
There is a RoutesConfiguratorMixin class with a get_routes_mapper method. I tried to import the class and call its method, but got an error that there was no registry in the instance:
from pyramid.config.routes import RoutesConfiguratorMixin as Router

r = Router()
routes = r.get_routes_mapper()
# ... and the same code as above
Doesn't work.
There are two ways: one is supported (public) and one is unsupported (private).
Option #1 is to use the introspector, as explained here.
Option #2 is to use the routes mapper (which is not a public API), in the way that the pyramid_debugtoolbar does in its routes panel (see the sketch below).
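A rough sketch of option #2, assuming a regular Pyramid view and that your routes are already registered (IRoutesMapper is the utility the debugtoolbar queries):
from pyramid.interfaces import IRoutesMapper
from pyramid.response import Response
from pyramid.view import view_config

@view_config(route_name='route1')
def someView(request):
    # Look up the routes mapper utility from the registry (not a public API).
    mapper = request.registry.queryUtility(IRoutesMapper)
    r = ''
    for route in mapper.get_routes():
        r += '{0} => {1}<br/>'.format(route.name, route.pattern)
    return Response(r)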
Pyramid installs a bin script called proutes for exactly that purpose.
Alternatively, start pshell with your app config, and then run:
print("\n".join([r.path for r in app.routes_mapper.routelist]))