ListThings AWS IoT with Python - python

I am a beginner in Python. I have the following code, which I took from the AWS documentation, to get the list of things in AWS IoT:

import boto3

def getthings():
    client = boto3.client('iot', region_name='name')
    response = client.list_things(nextToken='string', maxResults=123,
                                  attributeName='string', attributeValue='string',
                                  thingTypeName='string')

I get the following error:

InvalidRequestException: An error occurred (InvalidRequestException)
when calling the ListThings operation.

What is the problem?

Though there is not much detail in this exception, the following steps could help you resolve it faster.
Try it from the Python console first.
Make sure you have the AWS CLI installed.
Run aws configure to set up your access key and secret key (a quick credential check is sketched below).
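As a quick credential check from the Python console, a minimal sketch (assuming your credentials are already set up via aws configure or environment variables):

import boto3

# Verify that boto3 can find valid credentials before calling AWS IoT.
sts = boto3.client('sts')
print(sts.get_caller_identity())  # prints the account and caller ARN if the credentials work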
By the way, the following code worked to list the things:

import boto3

client = boto3.client('iot')
response = client.list_things(maxResults=123, thingTypeName='appropriate thing type')
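If you have more things than one page can hold, only pass nextToken when a previous response actually returned one. A minimal pagination sketch along those lines (the region and maxResults values are illustrative):

import boto3

client = boto3.client('iot', region_name='us-east-1')  # use your real region

things = []
kwargs = {'maxResults': 100}
while True:
    response = client.list_things(**kwargs)
    things.extend(response.get('things', []))
    token = response.get('nextToken')
    if not token:
        break
    kwargs['nextToken'] = token  # only send nextToken when the API returned one

print(len(things))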

Related

gcloud monitoring_v3 query fails on AttributeError 'WhichOneof'

I've been using this lib for a while and everything was working great. I use it to query the CPU utilization of gcloud machines.
This is my code:

from google.cloud.monitoring_v3.query import Query

query_obj = Query(metric_service_client, project,
                  "compute.googleapis.com/instance/cpu/utilization",
                  minutes=mins_backward_check)
metric_res = query_obj.as_dataframe()
Everything was working fine until lately it started to fail.
I'm getting:
{AttributeError}'WhichOneof'
Debugging it, I see it fails inside the "as_dataframe()" code, specifically in this part:
data=[_extract_value(point.value) for point in time_series.points]
When it tries to extract the value from the point object.
The _extract_value code seems to use the WhichOneof attribute, which seems to be related to the protobuf lib.
I didn't change any of those libs' versions; does anyone have any clue what causes it to fail now?
If you're confident (!) that you've not changed anything, then this would appear to be Google breaking its API and you may wish to file an issue on Google's issue tracker on one of these components:
https://issuetracker.google.com/issues/new?component=187228&template=1162638
https://issuetracker.google.com/issues/new?component=187143&template=800102
I think Cloud Monitoring is natively a gRPC-based API which would explain the protobuf reference.
A good sanity check is to use the APIs Explorer and check the method you're using there, to see whether you can account for the request/response, perhaps:
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/query
NOTE: Your question may be easy to parse for someone familiar with the Cloud Monitoring Python SDK, but it isn't easy to repro. Please consider providing a simple repro of your issue, including requirements.txt and a full code snippet.
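For reference, a repro of that shape could look roughly like the following; the project ID, time window and the version pins in requirements.txt are placeholders to fill in from your own environment:

# requirements.txt (pin the exact versions you have installed)
#   google-cloud-monitoring==<your version>
#   protobuf==<your version>

from google.cloud import monitoring_v3
from google.cloud.monitoring_v3.query import Query

client = monitoring_v3.MetricServiceClient()
query_obj = Query(client, "my-project-id",  # placeholder project
                  "compute.googleapis.com/instance/cpu/utilization",
                  minutes=10)
df = query_obj.as_dataframe()  # reportedly fails here with AttributeError: 'WhichOneof'
print(df.head())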

"Function failed on loading user code. Error message: You must specify a region." when uploading function through GCF online console

Within a framework I am building, I have some functions that run on the main FaaS providers (AWS, GCP, Azure, Alicloud). The main function is essentially an if/elif on an environment variable that decides which function to call ("do stuff on aws", "do stuff on gcp", etc.). The functions essentially just read from the appropriate database (AWS -> DynamoDB, GCP -> Firestore, Azure -> Cosmos DB).
When uploading my zip to Google Cloud Functions through their web portal, I get the following error:
Function failed on loading user code. Error message: You must specify a region.
I'm concerned it has something to do with my Pipfile.lock and a clash with the AWS dependencies, but I'm not sure. I cannot find anywhere online where someone has had this error message with GCP (certainly not through the online console); I only see results for this error with AWS.
My requirements.txt file is simply:
google-cloud-firestore==1.4.0
The Pipfile.lock contains the Google requirements, but doesn't state the region anywhere. However, when using the GCP console, it automatically uploads to us-central1.
Found the answer in a Google Groups thread. If anyone else has this problem, it's because you're importing boto and uploading to GCP. GCP say it's boto's fault. So you can either split up your code so that you only bring in the necessary GCP files, or wrap your imports in ifs based on environment variables (a sketch of that approach follows the quote below).
The response from the gcp Product Manager was "Hi all -- closing this out. Turns out this wasn't an issue in Cloud Functions/gcloud. The error was one emitted by the boto library: "You must specify a region.". This was confusing because the concept of region applies to AWS and GCP. We're making a tweak to our error message so that this should hopefully be a little more obvious in the future."
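As an illustration of the wrap-imports approach, a minimal sketch; the CLOUD_PROVIDER variable name and the handler body are made up for the example:

import os

PROVIDER = os.environ.get("CLOUD_PROVIDER", "gcp")  # hypothetical environment variable

def handler(request):
    # Import each provider's SDK only when that provider is selected,
    # so boto is never imported inside a Cloud Function.
    if PROVIDER == "aws":
        import boto3
        table = boto3.resource("dynamodb").Table("my-table")
        # ... read from DynamoDB ...
    elif PROVIDER == "gcp":
        from google.cloud import firestore
        db = firestore.Client()
        # ... read from Firestore ...
    elif PROVIDER == "azure":
        from azure.cosmos import CosmosClient  # assumes the azure-cosmos package
        # ... read from Cosmos DB ...
    return "ok"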

Getting HTTP error 403 - invalid access token while trying to access cluster through Azure databricks

I'm trying to access an Azure Databricks Spark cluster with a Python script that takes a token as input (generated via the Databricks user settings) and calls a GET method to get the details of the cluster along with the cluster id.
Below is the code snippet. As shown, I have created a cluster in the southcentralus region.
import requests

headers = {"Authorization": "Bearer dapiad************************"}
data = requests.get("https://southcentralus.azuredatabricks.net/api/2.0/clusters/get?cluster_id=**************",
                    headers=headers).text
print(data)
The expected result should give the full details of the cluster, e.g.:
{"cluster_id":"0128-******","spark_context_id":3850138716505089853,"cluster_name":"abcdxyz","spark_version":"5.1.x-scala2.11","spark_conf":{"spark.databricks.delta.preview.enabled":"true"},"node_type_id" and so on .....}
The above code works when I execute it on Google Colaboratory, whereas the same is not working in my local IDE, i.e. IDLE. It gives an HTTP 403 error stating the following:
<p>Problem accessing /api/2.0/clusters/get. Reason:
<pre> Invalid access token.</pre></p>
Can anyone help me resolve the issue? I'm stuck on this part and not able to access the cluster through the APIs.
It could be due to an encoding issue when you pass the secret. Please look into this issue and how to resolve it. Even though the resolution they give is for AWS, it could be similar for Azure as well. Your secret might contain "/", which you have to replace.
There is a known problem in the last update related to the '+'
character in secret keys. In particular, we no longer support escaping
'+' into '%2B', which some URL-encoding libraries do.
The current best-practice way of encoding your AWS secret key is simply:

secretKey.replace("/", "%2F")

A sample Python script is given below:

new_secret_key = "MySecret/".replace("/", "%2F")
https://forums.databricks.com/questions/6590/s3serviceexception-raised-when-accessing-via-mount.html
https://forums.databricks.com/questions/6621/responsecode403-responsemessageforbidden.html

Boto3 get InvalidClientTokenId when using update_service_specific_credential

I want to change the Git credentials for AWS CodeCommit to Active/Inactive using Boto3.
I tried to use update_service_specific_credential but I got this error:
An error occurred (InvalidClientTokenId) when calling the CreateServiceSpecificCredential operation: The security token included in the request is invalid: ClientError
My code:
import boto3

iam_client = boto3.client('iam')
response = iam_client.update_service_specific_credential(UserName="****",
                                                         ServiceSpecificCredentialId="*****",
                                                         Status="Active")
Has anyone tried to use it?
Any advice?
Thanks!
AWS errors are often purposefully opaque/non-specific so could you give a bit more detail? Specifically, are the user performing the update and the user whose credentials are being updated two different users? There may be a race condition arising if the user being updated IS the user performing the update.
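One way to make that distinction explicit is to check which identity boto3 is actually signing the request with, and, if you are an admin updating someone else's credential, pass that user's name explicitly. A rough sketch, assuming the caller's credentials come from a named profile (the profile name and the masked IDs are placeholders):

import boto3

session = boto3.Session(profile_name="my-admin-profile")  # hypothetical profile

# Confirm which identity is signing the request; InvalidClientTokenId usually
# means the credentials being used are missing, expired, or from the wrong account.
print(session.client("sts").get_caller_identity()["Arn"])

iam = session.client("iam")
response = iam.update_service_specific_credential(
    UserName="****",                      # the user whose Git credential is being updated
    ServiceSpecificCredentialId="*****",  # from list_service_specific_credentials
    Status="Inactive",
)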

How to create Azure Application gateway using python SDK

I'm starting to feel a bit stupid. Has anyone been able to successfully create an Application Gateway using the Python SDK for Azure?
The documentation seems OK, but I'm struggling to find the right value to pass as the 'parameters' argument of azure.mgmt.network.operations.ApplicationGatewaysOperations.create_or_update(). I found a complete working example for load_balancer but can't find anything for Application Gateway. Getting 'string indices must be integers, not str' doesn't help at all. Any help will be appreciated, thanks!
Update: solved. A piece of advice for everyone doing this: look carefully at the type of data required for the Application Gateway params.
I know there is no Python sample for Application Gateway currently; I apologize for that...
Right now I suggest that you:
Create the Network client using this tutorial or this one
Take a look at this ARM template for Application Gateway. The Python parameters will be very close to this JSON. At worst, you can deploy the ARM template using the Python SDK too (see the sketch after this list).
Take a look at the ReadTheDocs page of the create operation; it will give you an idea of what is expected as parameters.
Open an issue on the GitHub tracker, so you can follow along when I do a sample (or at least a unit test you can mimic).
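As an illustration of the "deploy the ARM template from Python" fallback, a rough sketch using azure-mgmt-resource; the template path, resource group, gateway name and credential values are placeholders, and the client style matches the older SDK of that era:

import json
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient

credentials = ServicePrincipalCredentials(client_id="...", secret="...", tenant="...")
resource_client = ResourceManagementClient(credentials, "<subscription-id>")

# Load the Application Gateway ARM template referenced above.
with open("application_gateway_template.json") as f:
    template = json.load(f)

deployment = resource_client.deployments.create_or_update(
    "my-resource-group",
    "appgw-deployment",
    {
        "mode": "Incremental",
        "template": template,
        "parameters": {"applicationGatewayName": {"value": "myAppGateway"}},
    },
)
deployment.wait()  # create_or_update returns a poller for this long-running operation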
Edit after question in comment:
To get the IP of VM once you have a VM object:
# Gives you the ID of this NIC
nic_id = vm.network_profile.network_interfaces[0].id
# Parse this ID to get the nic name
nic_name = nic_id.split('/')[-1]
# Get the NIC instance
nic = network_client.network_interfaces.get('RG', nic_name)
# Get the actual IP
nic.ip_configurations[0].private_ip_address
Edit:
I finally wrote the sample:
https://github.com/Azure-Samples/network-python-manage-application-gateway
(I work at MS and I'm responsible for the Azure SDK for Python.)
