I've been using this lib for a while and everything was working great. I'm using it to query the CPU utilization of Google Cloud machines.
This is my code:
from google.cloud.monitoring_v3.query import Query

query_obj = Query(metric_service_client, project,
                  "compute.googleapis.com/instance/cpu/utilization",
                  minutes=mins_backward_check)
metric_res = query_obj.as_dataframe()
Everything was working fine until recently, when it started to fail.
I'm getting:
{AttributeError}'WhichOneof'
Debugging it, I see it fails inside the as_dataframe() code, specifically in this part:
data=[_extract_value(point.value) for point in time_series.points]
It fails when it tries to extract the value from the point object.
The _extract_value code seems to use the WhichOneof attribute, which seems to be related to the protobuf library.
I didn't change any of those library versions; does anyone have a clue why it fails now?
If you're confident (!) that you've not changed anything, then this would appear to be Google breaking its API and you may wish to file an issue on Google's issue tracker on one of these components:
https://issuetracker.google.com/issues/new?component=187228&template=1162638
https://issuetracker.google.com/issues/new?component=187143&template=800102
I think Cloud Monitoring is natively a gRPC-based API which would explain the protobuf reference.
A good sanity check is to use APIs Explorer and check the method you're using there to see whether you can account for the request/response, perhaps:
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/query
NOTE Your question may be easy to parse for someone familiar with the Cloud Monitoring Python SDK but isn't easy to repro. Please consider providing a simple repro of your issue, including requirements.txt and a full code snippet.
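By way of example, a minimal repro along those lines might look like this sketch (PROJECT_ID and the 30-minute window are placeholders; the version printout covers the two libraries most likely involved and is what belongs in the requirements.txt of a bug report):

# Hedged repro sketch: PROJECT_ID is a placeholder, and the Query import
# path assumes the google-cloud-monitoring package layout.
from importlib.metadata import version  # Python 3.8+

from google.cloud import monitoring_v3
from google.cloud.monitoring_v3.query import Query

# Report the versions worth including alongside the repro.
print("google-cloud-monitoring:", version("google-cloud-monitoring"))
print("protobuf:", version("protobuf"))

client = monitoring_v3.MetricServiceClient()
query_obj = Query(
    client,
    "PROJECT_ID",
    "compute.googleapis.com/instance/cpu/utilization",
    minutes=30,
)
print(query_obj.as_dataframe().head())  # fails in _extract_value per the question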
Related
Within a framework I am building some functions that run on the main FaaS providers (AWS, GCP, Azure, Alicloud). The main function is essentially an if/elif on an environment variable that decides which function to call ("do stuff on AWS", "do stuff on GCP", etc.). The functions essentially just read from the appropriate database (AWS -> DynamoDB, GCP -> Firestore, Azure -> Cosmos DB).
When uploading my zip to Google Cloud Functions through their web portal, I get the following error:
Function failed on loading user code. Error message: You must specify a region.
I suspect it has something to do with my Pipfile.lock and a clash with the AWS dependencies, but I'm not sure. I cannot find anywhere online where someone has had this error message with GCP (certainly not through the online console); I only see results for this error with AWS.
My requirements.txt file is simply:
google-cloud-firestore==1.4.0
The Pipfile.lock contains the Google requirements but doesn't state a region anywhere. However, when using the GCP console, it automatically deploys to us-central1.
Found the answer in a Google Groups thread. If anyone else has this problem, it's because you're importing boto and uploading to GCP. GCP say it's boto's fault. So you can either split up your code so that you only bring in the necessary GCP files, or wrap your imports in conditionals based on environment variables (a sketch follows the PM's response below).
The response from the gcp Product Manager was "Hi all -- closing this out. Turns out this wasn't an issue in Cloud Functions/gcloud. The error was one emitted by the boto library: "You must specify a region.". This was confusing because the concept of region applies to AWS and GCP. We're making a tweak to our error message so that this should hopefully be a little more obvious in the future."
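To illustrate the "wrap your imports" approach, here is a hedged sketch; the CLOUD_PROVIDER variable, the table name, and the collection name are made-up placeholders, not values from the question:

import os

# Only import the SDK for the provider this deployment actually targets,
# so boto/boto3 never gets imported inside a Cloud Function.
PROVIDER = os.environ.get("CLOUD_PROVIDER", "gcp")

if PROVIDER == "aws":
    import boto3

    def read_record(key):
        table = boto3.resource("dynamodb").Table("my-table")
        return table.get_item(Key={"id": key}).get("Item")

elif PROVIDER == "gcp":
    from google.cloud import firestore

    def read_record(key):
        client = firestore.Client()
        return client.collection("my-collection").document(key).get().to_dict()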
I am new to Python. I need to get the usage details using the Python SDK.
I am able to do this using the Usage Details API, but I am unable to do so using the SDK.
I am trying to use the azure.mgmt.consumption.operations.UsageDetailsOperations class. The official docs for UsageDetailsOperations
https://learn.microsoft.com/en-us/python/api/azure-mgmt-consumption/azure.mgmt.consumption.operations.usage_details_operations.usagedetailsoperations?view=azure-python#list-by-billing-period
specify four parameters for creating the object
(i.e. client: Client for service requests, config: Configuration of service client,
serializer: An object model serializer, deserializer: An object model deserializer).
Of these parameters, I only have the client.
I need help understanding how to get the other three parameters, or whether there is another way to create the UsageDetailsOperations object.
Or is there any other approach to get the usage details?
Thanks!
This class is not designed to be created manually; you need to create a consumption client, which will have an attribute "usages" that is the class in question (instantiated correctly).
There are unfortunately no samples for consumption yet, but creating the client will be similar to creating any other client (see Network client creation, for instance).
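For instance, a hedged sketch (the credential values, subscription ID, and billing period name are placeholders, and the exact attribute and model names vary between azure-mgmt-consumption versions):

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.consumption import ConsumptionManagementClient

credentials = ServicePrincipalCredentials(
    client_id="CLIENT_ID", secret="SECRET", tenant="TENANT_ID")
consumption_client = ConsumptionManagementClient(credentials, "SUBSCRIPTION_ID")

# The operations class from the question is already instantiated on the
# client, e.g. as consumption_client.usage_details ("usages" in some versions).
# list_by_billing_period is the method linked in the question; "201903-1"
# is a placeholder billing period name.
for usage in consumption_client.usage_details.list_by_billing_period(
        "201903-1", top=10):
    print(usage.instance_name, usage.pretax_cost)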
For consumption, what might help is the tests, since they give some idea of scenarios:
https://github.com/Azure/azure-sdk-for-python/blob/fd643a0/sdk/consumption/azure-mgmt-consumption/tests/test_mgmt_consumption.py
If you're new to Azure and Python, you might want to do this quickstart:
https://learn.microsoft.com/en-us/azure/python/python-sdk-azure-get-started
Feel free to open an issue in the main Python repo, asking for more documentation about this client (this will help prioritize it):
https://github.com/Azure/azure-sdk-for-python/issues
(I'm working at Microsoft in the Python SDK team).
I'm starting to feel a bit stupid. Has anyone been able to successfully create an Application Gateway using the Python SDK for Azure?
The documentation seems OK, but I'm struggling to find the right value to pass as 'parameters' to
azure.mgmt.network.operations.ApplicationGatewaysOperations.create_or_update(). I found a complete working example for load_balancer but can't find anything for Application Gateway. Getting 'string indices must be integers, not str' doesn't help at all. Any help will be appreciated, thanks!
Update: Solved. A piece of advice for everyone doing this: look carefully at the type of data required for the Application Gateway params.
I know there is no Python sample for Application Gateway currently; I apologize for that...
Right now I suggest you:
Create the Network client using this tutorial or this one (a hedged sketch follows this list)
Take a look at this ARM template for Application Gateway. The Python parameters will be very close to this JSON. At worst, you can deploy an ARM template using the Python SDK too.
Take a look at the ReadTheDocs page of the create operation; it will give you an idea of what is expected as parameters.
Open an issue on the Github tracker, so you can follow when I do a sample (or at least a unit test you can mimic).
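For reference, a hedged sketch of the client creation plus the call in question (the credential values, resource group, and gateway name are placeholders; in recent azure-mgmt-network versions the method is begin_create_or_update, and app_gateway_params still has to be filled in from the ARM template or the model docs):

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.network import NetworkManagementClient

credentials = ServicePrincipalCredentials(
    client_id="CLIENT_ID", secret="SECRET", tenant="TENANT_ID")
network_client = NetworkManagementClient(credentials, "SUBSCRIPTION_ID")

# Placeholder: the real value must describe the whole gateway -- sku,
# gateway_ip_configurations, frontend_ip_configurations, frontend_ports,
# backend_address_pools, backend_http_settings_collection, http_listeners,
# request_routing_rules -- mirroring the ARM template structure.
app_gateway_params = {}

# create_or_update is a long-running operation and returns a poller.
poller = network_client.application_gateways.create_or_update(
    "MY_RESOURCE_GROUP", "my-app-gateway", app_gateway_params)
app_gateway = poller.result()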
Edit after question in comment:
To get the IP of a VM once you have a VM object:
# Gives you the ID of this NIC
nic_id = vm.network_profile.network_interfaces[0].id
# Parse this ID to get the nic name
nic_name = nic_id.split('/')[-1]
# Get the NIC instance
nic = network_client.network_interfaces.get('RG', nic_name)
# Get the actual IP
nic.ip_configurations[0].private_ip_address
Edit:
I finally wrote the sample:
https://github.com/Azure-Samples/network-python-manage-application-gateway
(I work at MS and I'm responsible for the Azure SDK for Python.)
I'm really confused about how to try Datastore locally. Please give me a minute to explain.
I'm developing an app composed of a few microservices deployed as a single GAE app. In one part of the app, I use the Datastore. So when I run my app, I use the development server, and when I save something in the Datastore by calling some method, I can see the entity perfectly in GAE's admin web portal.
Well, now, instead of calling the ndb library and its methods directly, I've built a small library over ndb to abstract its functionality, so I can call insertUser() instead of working directly with ndb. The problems appear when I try to test this small library that I built (I've written a test.py file to do this).
At first, I thought this could not work because the test was executing without the dev server running. Then I searched for info about how to simulate the Datastore locally and found the emulator, but afterwards I also found local unit testing with the stubs, and now I don't understand anything.
I've tried both (the gcloud Datastore emulator and the stub with unittest) and I can't get a simple example working:
I want to test that an entity is saved in the Datastore, and afterwards test that I can read that entity.
I suppose that the dev_server (in the SDK) emulates the Datastore (because I can see the list of my entities there), but then why use the Datastore emulator in local dev? And why is it necessary to use the Datastore stub if we have a Datastore emulator for all the tests that I want? I don't understand.
I understand that maybe my question is more about concepts than code, but I really need to understand the best way to work with this.
Finally, I think I solved and understood my problem. If I were working with another system that I wanted to connect to Cloud Datastore, I would need to use the "emulator". But that isn't my case. So I need to use the stubs with unittest, because there is no simple way (I think it is impossible) to do this with the dev_server (while it is running).
But I found two main problems:
The first was how to import the google_appengine libraries, because the documentation isn't very clear (in my view); after searching through other users' answers, I found that my solution was something like this:
import sys

sys.path.insert(1, '../../../../google_appengine')
if 'google' in sys.modules:
    del sys.modules['google']

from google.appengine.ext import ndb
from google.appengine.ext import testbed
The second was that when I executed one test (of the few I had), the next unit test failed; for example, in the first test I save the data and in the second I test whether the data was saved correctly with a read method.
When I initialized the datastore_v3_stub I used save_changes=True to specify that I wanted the changes to be permanent, but that didn't work and the changes apparently weren't being saved.
Afterwards, I found the datastore_file param in the testbed docs; when I used it and specified a file where the database is temporarily saved, all the tests began to work fine.
self.testbed.init_datastore_v3_stub(enable=True, save_changes=True, datastore_file='./dbFile')
Besides, I added a final step (using the unittest library) to delete this file, so the file is erased when the tests end (avoiding errors in the next execution).
@classmethod
def tearDownClass(cls):
    """
    Delete the temporary database file after all the tests have run.
    """
    os.remove('./dbFile')
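Putting those pieces together, a sketch of the save-then-read test described above might look like this (the User model, the SDK path, and the file name are assumptions; the memcache stub is initialized because ndb expects it):

import os
import sys
import unittest

sys.path.insert(1, '../../../../google_appengine')  # adjust to your SDK path
if 'google' in sys.modules:
    del sys.modules['google']

from google.appengine.ext import ndb
from google.appengine.ext import testbed


class User(ndb.Model):
    name = ndb.StringProperty()


class DatastoreTest(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        # Persist writes to a file so data survives across test methods.
        self.testbed.init_datastore_v3_stub(
            save_changes=True, datastore_file='./dbFile')
        self.testbed.init_memcache_stub()

    def tearDown(self):
        self.testbed.deactivate()

    def test_1_insert_user(self):
        User(id='u1', name='Alice').put()

    def test_2_read_user(self):
        user = User.get_by_id('u1')
        self.assertIsNotNone(user)
        self.assertEqual(user.name, 'Alice')

    @classmethod
    def tearDownClass(cls):
        # Remove the temporary datastore file after all tests have run.
        if os.path.exists('./dbFile'):
            os.remove('./dbFile')


if __name__ == '__main__':
    unittest.main()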
I think that GAE and the whole Google Cloud Platform are a very good solution for developing apps quickly, but I also think they need to revise and extend their docs, especially for non-expert programmers (like me).
I hope this solution may help someone; if you think I've made a mistake, please comment.
I'm seeing an issue where the SoftLayer API is missing the serverRoom field for over 75% of our servers. I've confirmed this using both their Python and Ruby libraries (https://softlayer-api-python-client.readthedocs.org/en/latest/api/managers/hardware/#SoftLayer.managers.hardware.HardwareManager.list_hardware and https://softlayer.github.io/ruby/server_locate/ respectively). Note that the Ruby code I'm running is simply one of their published examples.
It seems like SoftLayer has a naming convention of creating FQDNs like [dataCenter].[serverRoom].[rackNumber].[slotNumber]. I'm not sure if it is just another indicator of the problem or helpful in troubleshooting the root cause, but the servers that are missing serverRoom seem to be named incorrectly by SoftLayer, according to what appears to be SoftLayer's own naming convention: they are named [dataCenter].[rackNumber].[slotNumber], notably missing serverRoom.
Basically it looks like their database (which I assume is backing their API) is just missing the serverRoom for most of the hosts, or they named most of our hosts incorrectly and the database can't account for it, so the info is missing when I call their API. Does anyone have a similar experience where SoftLayer perhaps named things wrong, or forgot to do this data entry, or are there some other/different API calls I should be making beyond what SoftLayer themselves recommend?
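For reference, a hedged sketch of how this can be checked with the Python client (the object mask assumes serverRoom is exposed as a relational property of SoftLayer_Hardware; credentials come from the environment or ~/.softlayer as usual):

import SoftLayer

client = SoftLayer.create_client_from_env()

# Ask explicitly for serverRoom so it's easy to see which servers lack it.
mask = 'mask[id,fullyQualifiedDomainName,datacenter[name],serverRoom[longName]]'
for server in client.call('SoftLayer_Account', 'getHardware', mask=mask):
    room = server.get('serverRoom', {}).get('longName', 'MISSING')
    print(server['fullyQualifiedDomainName'], room)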
I tried to verify and reproduce the issue that you mentioned, but I couldn't. Please submit a ticket with all the information you can provide so we can verify and isolate this issue.
SoftLayer support confirmed that there was some sort of block on hidden sites where this info wasn't displayed via their API. Thanks to #ruber-cuellar, who said something similar in one of his comments, but I disagree that "There is not an issue": from my perspective there definitely was an issue that they (SoftLayer support) needed to resolve on their end before their example API calls started showing us all the info. Special thanks to ALLmightySPIFF on #softlayer, who was able to repro the issue for me and provided a real-time response.