Cannot create an ElastiCache cluster in the sa-east-1 region using boto - python

I'm trying to create a Redis ElastiCache cluster using boto in the sa-east-1 region, and boto is giving me this error message:
{"Error":{"Code":"InvalidParameterValue","Message":"sa-east-1 is not a valid availability zone.","Type":"Sender"},"RequestId":"2q34hj192-6902-11e4-8b4a-afafaefasefsadfsadf"}
with this code:
from boto.elasticache.layer1 import ElastiCacheConnection

self.elasticache = ElastiCacheConnection()
boto.elasticache.connect_to_region(
    'sa-east-1a',
    aws_access_key_id=settings.AWS_ACCESS_KEY,
    aws_secret_access_key=settings.AWS_SECRET_KEY
)
elasticache.create_cache_cluster(
    cache_cluster_id='test1',
    engine='redis',
    cache_node_type='cache.m3.medium',
    num_cache_nodes=1,
    preferred_availability_zone='sa-east-1',
)
Thanks

It's asking you for an availability zone, but you are providing a region. Correct values would be sa-east-1a or sa-east-1b, or just leave it blank if you have no preference.

After searching through the boto code, I found that

elasticache = boto.elasticache.connect_to_region(
    'sa-east-1',
    aws_access_key_id=settings.AWS_ACCESS_KEY,
    aws_secret_access_key=settings.AWS_SECRET_KEY
)
elasticache.create_cache_cluster(
    cache_cluster_id=cache_cluster_id,
    engine=engine,
    cache_node_type=cache_node_type,
    num_cache_nodes=num_cache_nodes,
    preferred_availability_zone='sa-east-1a',
    preferred_maintenance_window=preferred_maintenance_window,
)

works.
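
For anyone on the newer SDK: the same call translates almost one-to-one to boto3, since boto (v2) is long deprecated. A minimal sketch, untested, reusing the placeholder values from the question:

import boto3

# Connect to the region; pass an availability zone (not the region)
# as PreferredAvailabilityZone.
client = boto3.client('elasticache', region_name='sa-east-1')
client.create_cache_cluster(
    CacheClusterId='test1',
    Engine='redis',
    CacheNodeType='cache.m3.medium',
    NumCacheNodes=1,
    PreferredAvailabilityZone='sa-east-1a',
)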


Firestore Python SDK - count() aggregation

I'm using Firebase Firestore in my Python project (with their official Python SDK) and having trouble performing a count() aggregation. This function is supported according to their docs. However, they do not provide a Python example (they do in other parts of the documentation). I tried to play with it in the Python console, with something like this:
query = db.collection('videos').where('status', '==', 'pending')
query.count()
without any luck. So I'm wondering how this can be implemented. Does the Python SDK support this functionality?
The Firebase Admin Python SDK doesn't support that query yet. You can still use the runAggregationQuery REST API in the meantime. The Google Cloud Firestore Python SDK has aggregation result types available from v2.7.0+, so it should be available in the Admin SDK soon.
The API for this purpose is not fully ready yet, but it's coming along. If you don't want to use the REST API as suggested by @Dharmaraj, you can do something like this for now:
from google.cloud.firestore_v1.services.firestore import FirestoreClient
from google.cloud.firestore_v1.types.document import Value
from google.cloud.firestore_v1.types.firestore import RunAggregationQueryRequest
from google.cloud.firestore_v1.types.query import (
    StructuredAggregationQuery,
    StructuredQuery,
)

Aggregation = StructuredAggregationQuery.Aggregation
CollectionSelector = StructuredQuery.CollectionSelector
Count = Aggregation.Count
FieldFilter = StructuredQuery.FieldFilter
FieldReference = StructuredQuery.FieldReference
Filter = StructuredQuery.Filter
Operator = StructuredQuery.FieldFilter.Operator

client = FirestoreClient()
project_id = ""
request = RunAggregationQueryRequest(
    parent=f"projects/{project_id}/databases/(default)/documents",
    structured_aggregation_query=StructuredAggregationQuery(
        structured_query=StructuredQuery(
            from_=[CollectionSelector(collection_id="videos")],
            where=Filter(
                field_filter=FieldFilter(
                    field=FieldReference(
                        field_path="status",
                    ),
                    op=Operator.EQUAL,
                    value=Value(string_value="pending"),
                )
            ),
        ),
        aggregations=[Aggregation(count=Count())],
    ),
)
stream = client.run_aggregation_query(request=request)
print(next(stream).result.aggregate_fields["field_1"].integer_value)
Output:
1
Generally the following would work to count the total number of documents in a collection:
def count_documents(collection_id: str) -> int:
    client = FirestoreClient()
    project_id = ""
    request = RunAggregationQueryRequest(
        parent=f"projects/{project_id}/databases/(default)/documents",
        structured_aggregation_query=StructuredAggregationQuery(
            structured_query=StructuredQuery(
                from_=[CollectionSelector(collection_id=collection_id)]
            ),
            aggregations=[Aggregation(count=Count())],
        ),
    )
    stream = client.run_aggregation_query(request=request)
    return next(stream).result.aggregate_fields["field_1"].integer_value

print(count_documents(collection_id="videos"))
Output:
10
Make sure that you have google-cloud-firestore>=2.7.3 installed, and remember to set the value of the project_id variable in the count_documents function accordingly.
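
For completeness: recent releases of the standalone google-cloud-firestore client also expose a higher-level count() helper directly on queries, which avoids building the aggregation request by hand. A minimal sketch, assuming a version new enough to ship Query.count() (the alias "total" is arbitrary):

from google.cloud import firestore

db = firestore.Client()
query = db.collection("videos").where("status", "==", "pending")

# count() builds a server-side aggregation query; get() executes it
# and returns the aggregation results without streaming documents.
results = query.count(alias="total").get()
print(results[0][0].value)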

How To Export GCP Security Command Center Findings To BigQuery?

Similar to this: How to export GCP's Security Center Assets to a Cloud Storage via cloud Function?
I need to export the Findings as seen in the Security Command Center to BigQuery so we can easily filter the data we need and generate custom reports.
Using this documentation as an example (https://cloud.google.com/security-command-center/docs/how-to-api-list-findings#python), I wrote the following:
from google.cloud import securitycenter
from google.cloud import bigquery

JSONPath = "Path to JSON File For Service Account"
client = securitycenter.SecurityCenterClient().from_service_account_json(JSONPath)
BQclient = bigquery.Client().from_service_account_json(JSONPath)
table_id = "project.security_center.assets"
org_name = "organizations/1234567891011"
all_sources = "{org_name}/sources/-".format(org_name=org_name)
finding_result_iterator = client.list_findings(request={"parent": all_sources})
for i, finding_result in enumerate(finding_result_iterator):
    errors = BQclient.insert_rows_json(table_id, finding_result)
    if errors == []:
        print("New rows have been added.")
    else:
        print("Encountered errors while inserting rows: {}".format(errors))
However, that then gave me the error:
"json_rows argument should be a sequence of dicts".
Any help with this would be greatly appreciated :)
Not sure if this existed back in Q2 of 2021, but now there is documentation describing how to do this:
https://cloud.google.com/security-command-center/docs/how-to-analyze-findings-in-big-query
You can create exports of SCC findings to BigQuery using this command:
gcloud scc bqexports create BIG_QUERY_EXPORT \
    --dataset=DATASET_NAME \
    --folder=FOLDER_ID | --organization=ORGANIZATION_ID | --project=PROJECT_ID \
    [--description=DESCRIPTION] \
    [--filter=FILTER]
The filter allows you to exclude unwanted findings (they will remain in SCC, but won't be copied to BigQuery). It's useful if you want to export findings from only one project or from selected categories. (Use -category:CATEGORY to exclude a category; the same syntax works on other attributes as well.)
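For illustration, a filled-in invocation might look like the following; the export name, project, and dataset here are placeholders, and the filter reuses the exclusion syntax described above:

gcloud scc bqexports create my-scc-export \
    --project=my-project \
    --dataset=projects/my-project/datasets/scc_findings \
    --filter="-category:CATEGORY_TO_EXCLUDE"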
I managed to sort this by converting each finding into a plain dict first, since insert_rows_json expects a sequence of dicts:

for i, finding_result in enumerate(finding_result_iterator):
    rows_to_insert = [
        {u"category": finding_result.finding.category, u"name": finding_result.finding.name, u"project": finding_result.resource.project_display_name, u"external_uri": finding_result.finding.external_uri},
    ]
    errors = BQclient.insert_rows_json(table_id, rows_to_insert)

Query aws to list all resources using boto3 python sdk

Is there a way to get all the resources in an AWS account through Python code using boto3? I went through the documentation but didn't find any list function that solves this.
Try this. As a prerequisite, enable AWS Config for the region before running it.
import boto3
session = boto3.Session(profile_name='your-profilename')
client = session.client('config')
resources = ["AWS::EC2::CustomerGateway", "AWS::EC2::EIP", "AWS::EC2::Host", "AWS::EC2::Instance", "AWS::EC2::InternetGateway", "AWS::EC2::NetworkAcl", "AWS::EC2::NetworkInterface", "AWS::EC2::RouteTable", "AWS::EC2::SecurityGroup", "AWS::EC2::Subnet", "AWS::CloudTrail::Trail", "AWS::EC2::Volume", "AWS::EC2::VPC", "AWS::EC2::VPNConnection", "AWS::EC2::VPNGateway", "AWS::EC2::RegisteredHAInstance", "AWS::EC2::NatGateway", "AWS::EC2::EgressOnlyInternetGateway", "AWS::EC2::VPCEndpoint", "AWS::EC2::VPCEndpointService", "AWS::EC2::FlowLog", "AWS::EC2::VPCPeeringConnection", "AWS::IAM::Group", "AWS::IAM::Policy", "AWS::IAM::Role", "AWS::IAM::User", "AWS::ElasticLoadBalancingV2::LoadBalancer", "AWS::ACM::Certificate", "AWS::RDS::DBInstance", "AWS::RDS::DBParameterGroup", "AWS::RDS::DBOptionGroup", "AWS::RDS::DBSubnetGroup", "AWS::RDS::DBSecurityGroup", "AWS::RDS::DBSnapshot", "AWS::RDS::DBCluster", "AWS::RDS::DBClusterParameterGroup", "AWS::RDS::DBClusterSnapshot", "AWS::RDS::EventSubscription", "AWS::S3::Bucket", "AWS::S3::AccountPublicAccessBlock", "AWS::Redshift::Cluster", "AWS::Redshift::ClusterSnapshot", "AWS::Redshift::ClusterParameterGroup", "AWS::Redshift::ClusterSecurityGroup", "AWS::Redshift::ClusterSubnetGroup", "AWS::Redshift::EventSubscription", "AWS::SSM::ManagedInstanceInventory", "AWS::CloudWatch::Alarm", "AWS::CloudFormation::Stack", "AWS::ElasticLoadBalancing::LoadBalancer", "AWS::AutoScaling::AutoScalingGroup", "AWS::AutoScaling::LaunchConfiguration", "AWS::AutoScaling::ScalingPolicy", "AWS::AutoScaling::ScheduledAction", "AWS::DynamoDB::Table", "AWS::CodeBuild::Project", "AWS::WAF::RateBasedRule", "AWS::WAF::Rule", "AWS::WAF::RuleGroup", "AWS::WAF::WebACL", "AWS::WAFRegional::RateBasedRule", "AWS::WAFRegional::Rule", "AWS::WAFRegional::RuleGroup", "AWS::WAFRegional::WebACL", "AWS::CloudFront::Distribution", "AWS::CloudFront::StreamingDistribution", "AWS::Lambda::Alias", "AWS::Lambda::Function", "AWS::ElasticBeanstalk::Application", "AWS::ElasticBeanstalk::ApplicationVersion", "AWS::ElasticBeanstalk::Environment", "AWS::MobileHub::Project", "AWS::XRay::EncryptionConfig", "AWS::SSM::AssociationCompliance", "AWS::SSM::PatchCompliance", "AWS::Shield::Protection", "AWS::ShieldRegional::Protection", "AWS::Config::ResourceCompliance", "AWS::LicenseManager::LicenseConfiguration", "AWS::ApiGateway::DomainName", "AWS::ApiGateway::Method", "AWS::ApiGateway::Stage", "AWS::ApiGateway::RestApi", "AWS::ApiGatewayV2::DomainName", "AWS::ApiGatewayV2::Stage", "AWS::ApiGatewayV2::Api", "AWS::CodePipeline::Pipeline", "AWS::ServiceCatalog::CloudFormationProvisionedProduct", "AWS::ServiceCatalog::CloudFormationProduct", "AWS::ServiceCatalog::Portfolio"]
for resource in resources:
    response = client.list_discovered_resources(resourceType=resource)
    print('##################### {} #################'.format(resource))
    for i in range(len(response['resourceIdentifiers'])):
        print('{} , {}'.format(response['resourceIdentifiers'][i]['resourceType'], response['resourceIdentifiers'][i]['resourceId']))
In boto3 you can use the ResourceGroupsTaggingAPI method get_resources(), which retrieves resources mainly based on tags, but you can leave the tag filter parameter blank and get all supported resources.
Note that not all resources are included and that the call is limited to a single region, but I hope it can help you.
Examples:
Get all resources:
import boto3
client = boto3.client('resourcegroupstaggingapi')
client.get_resources()
Get resources of a specific service type:
import boto3
client = boto3.client('resourcegroupstaggingapi')
client.get_resources(
    ResourceTypeFilters=[
        'ec2:instance'
    ]
)
Official documentation:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/resourcegroupstaggingapi.html#ResourceGroupsTaggingAPI.Client.get_resources
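One caveat: get_resources() returns results a page at a time, so on accounts with many resources you need to follow the pagination token. A minimal sketch using boto3's built-in paginator (the region name is just an example):

import boto3

client = boto3.client('resourcegroupstaggingapi', region_name='us-east-1')

# The paginator follows PaginationToken across pages automatically.
paginator = client.get_paginator('get_resources')
for page in paginator.paginate():
    for mapping in page['ResourceTagMappingList']:
        print(mapping['ResourceARN'])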
Following on from the answer Omar gave, I came up with the following:
from pprint import pprint

import boto3
from botocore.exceptions import ClientError

regions = boto3.session.Session().get_available_regions('ec2')
for region in regions:
    print(region)
    try:
        client = boto3.client('resourcegroupstaggingapi', region_name=region)
        pprint([x.get('ResourceARN') for x in client.get_resources().get('ResourceTagMappingList')])
    except ClientError as e:
        print(f'Could not connect to region with error: {e}')
    print()
which will loop most regions and list any ARNs in that region, like so:
eu-north-1
[]
eu-west-1
['arn:aws:mq:eu-west-1:xxxxxxxxxxxx:broker:example:b-125099aa-8e22-462e-a8e9-bcc6b29c010a']
eu-west-2
[]
eu-west-3
[]

How to get ID of EMR matching specific name only with boto3

How do I get a list of AWS EMR cluster IDs matching a specific name with boto3?
I have this code here:
import boto3

client = boto3.client("emr")
cluster_name = 'Adhoc-CSDP-EMR'
response = client.list_clusters(
    ClusterStates=[
        'RUNNING', 'WAITING'
    ]
)
for cluster in response['Clusters']:
    print(cluster['Name'])
    print(cluster['Id'])
That will print all clusters in the running or waiting state. How do I filter the results that match cluster_name?
Umm, why can't we do something like this?
matching_cluster_ids = []
for cluster in response['Clusters']:
    if cluster_name == cluster['Name']:
        matching_cluster_ids.append(cluster['Id'])
Later you can execute a describe_cluster() (or any other operation) on any of the matching cluster_ids if you want.
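
One thing worth adding: list_clusters returns results in pages, so clusters beyond the first page would be missed by a single call. A minimal sketch using boto3's built-in EMR paginator, with the same placeholder cluster name as above:

import boto3

client = boto3.client("emr")
cluster_name = 'Adhoc-CSDP-EMR'

# Follow pagination so clusters beyond the first page are not missed.
matching_cluster_ids = []
paginator = client.get_paginator('list_clusters')
for page in paginator.paginate(ClusterStates=['RUNNING', 'WAITING']):
    for cluster in page['Clusters']:
        if cluster['Name'] == cluster_name:
            matching_cluster_ids.append(cluster['Id'])
print(matching_cluster_ids)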

Python - OTRS REST/SOAP API - python-otrs

Using a REST/SOAP API library like python-otrs or PyOTRS, is it possible to close a ticket?
Using python-otrs I try and receive this error:
otrs.client.SOAPError: TicketUpdate: Ticket->StateID or Ticket->State parameter is invalid! (TicketUpdate.InvalidParameter)
The code I tried is:
from otrs.ticket.template import GenericTicketConnectorSOAP
from otrs.client import GenericInterfaceClient
from otrs.ticket.objects import Ticket, Article, DynamicField, Attachment

server_uri = r'https://www.example.com'
webservice_name = 'GenericTicketConnectorSOAP'
client = GenericInterfaceClient(server_uri, tc=GenericTicketConnectorSOAP(webservice_name))

# user session
client.tc.SessionCreate(user_login='user', password='pass')

t_upd = Ticket(State='closed', StateID='3')
client.tc.TicketUpdate(3657, ticket=t_upd)
where 3657 is the id of the ticket.
Thanks,
jp
I think the correct state name is 'closed successful', but you don't need to add both the name and the ID; just one of them should be enough.
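Based on that suggestion, the update call would become something like this (a sketch, assuming a state named 'closed successful' exists in your OTRS instance):

# Pass only the state name; drop StateID so the two values cannot conflict.
t_upd = Ticket(State='closed successful')
client.tc.TicketUpdate(3657, ticket=t_upd)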
Refer to the exact key used in the backend; you should be able to check/match it in the admin state view as well. The value of the parameter should always match the unique key used in the status table.
