How can I retrieve usage and cost data for my IBM Cloud account using a REST API? I found that there are billing-related commands and that I can export some data as JSON. Is there an API or SDK I can use, ideally with Python?
Here are some of the IBM Cloud billing commands I use:
ibmcloud billing resource-instances-usage --json
ibmcloud billing account-usage --json
Are there equivalent APIs for that?
UPDATED:
An API is now documented here: https://cloud.ibm.com/apidocs/metering-reporting
OLD:
I couldn't find a documented API, but used tracing to see how the above commands are executed. With a valid access_token, a program can call the metering host and obtain usage data for an account, a resource group, or all resource instances:
A GET on the following URL with an account ID and month as YYYY-MM returns a JSON object with all resource usage and related cost:
https://metering-reporting.ng.bluemix.net/v4/accounts/account_id/resource_instances/usage/?_limit=100&_names=true
I coded up a small Python script that dumps that data or prints it as CSV.
import requests

def processResourceInstanceUsage(account_id, billMonth):
    # iam_token must hold a valid IAM access token,
    # e.g. obtained via `ibmcloud iam oauth-tokens`
    METERING_HOST = "https://metering-reporting.ng.bluemix.net"
    USAGE_URL = "/v4/accounts/" + account_id + "/resource_instances/usage/" + billMonth + "?_limit=100&_names=true"
    url = METERING_HOST + USAGE_URL
    headers = {
        "Authorization": "{}".format(iam_token),
        "Accept": "application/json",
        "Content-Type": "application/json"
    }
    response = requests.get(url, headers=headers)
    print("\n\nResource instance usage for first 100 items")
    return response.json()
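For the account-level summary (the rough equivalent of ibmcloud billing account-usage), the same host exposes a /v4/accounts/{account_id}/usage/{billingmonth} endpoint. Here is a sketch along the same lines, assuming the same iam_token as above:

def processAccountUsage(account_id, billMonth):
    # Rough equivalent of `ibmcloud billing account-usage`; assumes iam_token
    # holds a valid IAM access token, as in the function above
    url = ("https://metering-reporting.ng.bluemix.net/v4/accounts/"
           + account_id + "/usage/" + billMonth + "?_names=true")
    headers = {
        "Authorization": "{}".format(iam_token),
        "Accept": "application/json",
        "Content-Type": "application/json"
    }
    response = requests.get(url, headers=headers)
    return response.json()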
The GitHub repo openwhisk-cloud-usage-samples uses a serverless approach to getting data via APIs; examples are included in the repo. It's written in JavaScript, but a package it uses, openwhisk-jsonetl, was designed so that you can declare the URLs and parameters in YAML (rather than writing code) to request and transform JSON.
Related
I am using this API URL for getting virtual machine compliance status; however, it's giving me a 202 Accepted response.
I have attached an image for reference; I want to get this information through the API.
I tried to reproduce the same in my environment and got the below results:
I created an Azure AD application named WebApp and granted API permissions as below:
I generated an access token via Postman with the below parameters:
POST https://login.microsoftonline.com/<tenantID>/oauth2/v2.0/token
client_id: appID
grant_type: client_credentials
client_secret: secret
scope: https://management.azure.com/.default
Response:
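If you want to do the same token request from Python instead of Postman, it could look roughly like this (the tenant ID, app ID and secret below are placeholders):

import requests

tenant_id = "<tenantID>"  # placeholder
token_url = "https://login.microsoftonline.com/{}/oauth2/v2.0/token".format(tenant_id)
payload = {
    "client_id": "<appID>",            # placeholder
    "grant_type": "client_credentials",
    "client_secret": "<secret>",       # placeholder
    "scope": "https://management.azure.com/.default",
}
# The token endpoint expects form-encoded data, not JSON
token = requests.post(token_url, data=payload).json()["access_token"]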
Using the above token, I too got the same 202 Accepted response when running the same query as you, like below:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/assessPatches?api-version=2022-08-01
Response:
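For reference, the same call from Python might look like the sketch below (the subscription, resource group and VM name are placeholders, and the 202 indicates the patch assessment runs asynchronously):

import requests

url = ("https://management.azure.com/subscriptions/{subscriptionId}"
       "/resourceGroups/{resourceGroupName}"
       "/providers/Microsoft.Compute/virtualMachines/{vmName}"
       "/assessPatches?api-version=2022-08-01").format(
    subscriptionId="<subscription-id>",      # placeholder
    resourceGroupName="<resource-group>",    # placeholder
    vmName="<vm-name>",                      # placeholder
)
headers = {"Authorization": "Bearer {}".format(token)}  # token from the step above
resp = requests.post(url, headers=headers)
print(resp.status_code)  # 202 Accepted: the assessment runs asynchronously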
Currently, there is no REST API query that can fetch all virtual machines' compliance status. To confirm that, you can check this.
Alternatively, you can make use of a Kusto query.
I have the below information on virtual machine compliance status in my automation account:
To get this information from a Kusto query, you need to follow the below steps:
Go to Azure Portal -> Automation Accounts -> Select account -> Logs -> Azure Update Management -> Computers list -> Run
When I ran the query with scope as subscription, I got the results successfully like below:
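If you would rather pull that data programmatically than through the portal, one option is the azure-monitor-query package. The sketch below is an assumption on my side: the workspace ID is a placeholder and the Kusto query is only an example, so adapt it to whatever the portal's "Computers list" query contains.

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Example query only: summarize update compliance per computer from the Update table
query = "Update | summarize count() by Computer, UpdateState"

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",   # placeholder
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)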
Reference:
Retrieve azure update management status using REST API by VenkateshDodda-MSFT
I am trying to request the usage metrics from a virtual machine I have running on Azure DevOps. I know it's online because I've sent a ping. However, every time I try to run the program with the correct GET information filled in, it gives me an error:
{"error":{"code":"AuthenticationFailed","message":"Authentication failed. The 'Authorization' header is missing."}}
I am following the instructions here: https://learn.microsoft.com/en-us/azure/virtual-machines/linux/metrics-vm-usage-rest
import requests
BASE_URL = "GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmname}/providers/microsoft.insights/metrics?api-version=2018-01-01&metricnames=Percentage%20CPU×pan=2018-06-05T03:00:00Z/2018-06-07T03:00:00Z"
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer {myPAT}"
}
response = requests.get(BASE_URL,headers)
print(response.text)
The bug lies in my Authorization header, what am I missing?
Edit: Actually, the question "Is there a way to call Azure Devops via python using 'requests'?" solved my issue, but now I have another error: {"error":{"code":"InvalidAuthenticationToken","message":"The access token is invalid."}}. I am reading the docs: https://learn.microsoft.com/en-us/azure/active-directory-b2c/access-tokens Thank you.
Basically, you cannot use an Azure DevOps PAT. You need to Create a Service Principal and Request the Access Token by following this document: Azure REST API Reference.
It's easy to use curl to achieve that; please refer to Calling Azure REST API via curl for details.
And as mentioned in the blog, if you need a token just to run some tests and you don't want to go through service principal creation, you can just run the below command to get the access token. You'll get an access token with a maximum validity of 1 hour.
az account get-access-token
After that you can use the access token in your script.
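For example, a rough sketch of feeding that CLI token into the metrics call (the subscription, resource group and VM name are placeholders):

import json
import subprocess
import requests

# Get a short-lived token from the Azure CLI (assumes you are already logged in via `az login`)
token = json.loads(
    subprocess.check_output(["az", "account", "get-access-token"])
)["accessToken"]

url = ("https://management.azure.com/subscriptions/{subscriptionId}"
       "/resourceGroups/{resourceGroupName}"
       "/providers/Microsoft.Compute/virtualMachines/{vmName}"
       "/providers/microsoft.insights/metrics").format(
    subscriptionId="<subscription-id>",    # placeholder
    resourceGroupName="<resource-group>",  # placeholder
    vmName="<vm-name>",                    # placeholder
)
params = {
    "api-version": "2018-01-01",
    "metricnames": "Percentage CPU",
    "timespan": "2018-06-05T03:00:00Z/2018-06-07T03:00:00Z",
}
response = requests.get(url, headers={"Authorization": "Bearer " + token}, params=params)
print(response.text)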
For weather processing purposes, I am looking to automatically retrieve daily weather forecast data into Google Cloud Storage.
The files are available at a public HTTP URL (http://dcpc-nwp.meteo.fr/openwis-user-portal/srv/en/main.home), but they are very large (between 30 and 300 megabytes). The size of the files is the main issue.
After looking at previous stackoverflow topics, I have tried two unsuccessful methods:
1/ First attempt via urlfetch in Google App Engine
from google.appengine.api import urlfetch
url = "http://dcpc-nwp.meteo.fr/servic..."
result = urlfetch.fetch(url)
[...] # Code to save in a Google Cloud Storage bucket
But I get the following error message on the urlfetch line:
DeadlineExceededError: Deadline exceeded while waiting for HTTP response from URL
2/ Second attempt via the Cloud Storage Transfer Service
According to the documentation, it is possible to retrieve HTTP data into Cloud Storage directly via the Cloud Storage Transfer Service:
https://cloud.google.com/storage/transfer/reference/rest/v1/TransferSpec#httpdata
But it requires the size and MD5 of the files before the download. This option cannot work in my case because the website does not provide that information.
3/ Any ideas?
Do you see any solution to retrieve automatically large file on HTTP into my Cloud Storage bucket?
3/ Workaround with a Compute Engine instance
Since it was not possible to retrieve large files from external HTTP with App Engine or directly with Cloud Storage, I have used a workaround with an always-running Compute Engine instance.
This instance regularly checks if new weather files are available, downloads them and uploads them to a Cloud Storage bucket.
For scalability, maintenance, and cost reasons, I would have preferred to use only serverless services, but fortunately:
It works well on a fresh f1-micro Compute Engine instance (no extra package required and only $4/month if running 24/7)
The network traffic from Compute Engine to Google Cloud Storage is free if the instance and the bucket are in the same region ($0/month)
The MD5 and size of the file can be retrieved easily and quickly using the curl -I command, as mentioned in this link: https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests.
The Storage Transfer Service can then be configured to use that information.
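If you want to check from Python whether a given server exposes those headers, a quick sketch (the URL is a placeholder, and not every server returns Content-MD5):

import requests

# Placeholder URL; replace with the actual forecast file
url = "http://dcpc-nwp.meteo.fr/<path-to-file>"

head = requests.head(url, allow_redirects=True)
size = head.headers.get("Content-Length")   # size in bytes, if exposed
md5 = head.headers.get("Content-MD5")       # base64 MD5, only if the server sends it
print("size:", size, "md5:", md5)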
Another option would be to use a serverless Cloud Function. It could look something like the code below, in Python.
import requests
def download_url_file(url):
    try:
        print('[ INFO ] Downloading {}'.format(url))
        req = requests.get(url)
        if req.status_code == 200:
            # Download and save to /tmp (the only writable directory in Cloud Functions)
            output_filename = url.split('/')[-1]
            output_filepath = '/tmp/{}'.format(output_filename)
            with open(output_filepath, 'wb') as f:
                f.write(req.content)
            print('[ INFO ] Successfully downloaded to output_filepath: {} & output_filename: {}'.format(output_filepath, output_filename))
            return output_filename
        else:
            print('[ ERROR ] Status Code: {}'.format(req.status_code))
    except Exception as e:
        print('[ ERROR ] {}'.format(e))
    return None
Currently, the MD5 and size are required for Google's Transfer Service; we understand that in cases like yours, this can be difficult to work with, but unfortunately we don't have a great solution today.
Unless you're able to get the size and MD5 by downloading the files yourself (temporarily), I think that's the best you can do.
I have the same question as in the link below. That question remains unanswered:
Why different requests return same geolocation with google geolocation api
I have the same question: why do I get the same Google data response on every request? The data I get is always the same.
If I send the curl POST request to Google, I get the same response no matter the value in the JSON key.
If I send the POST request in Python 2.7 using requests, I always get the same response no matter the value I set in the variable used for the URL POST request.
Any ideas what key values would be needed so I can pull the desired data? For example, I want to vary the "locationAreaCode" across area codes and have the request return the "lat" and "lng" for each lookup.
Using the terminal in macOS:
curl -d @your_filename.json -H "Content-Type: application/json" -i https://www.googleapis.com/geolocation/v1/geolocate?key=[use your Google API key]
Note: "your_filename.json" is the literal name for the .json file.
This .json file is currently configured as below (I have tried various key values):
[
"cellTowers",
[
"locationAreaCode",
415
]
]
When I use Python 2.7 with the requests syntax, I get a "response 200" and the exact same data returned.
I always get the same response:
Confirming Status Code is: 200
This is the POST url sent to Google
[link - had to remove since new user to stackoverflow]
Data Returned on the POST request is:
{
"location": {
"lat": 25.7459338,
"lng": -80.30449569999999
},
"accuracy": 37571.0
}
Output from python 2.7:
python API_json2-6.py
[link - had to remove since new user to stackoverflow]
Status Code is: 200
Confirming Status Code is: 200
This is the POST url sent to Google
https://www.googleapis.com/geolocation/v1/geolocate?key=[your google api KEY]&locationAreaCode=415
Data Returned on the POST request is:
{
"location": {
"lat": 25.7459338,
"lng": -80.30449569999999
},
"accuracy": 37571.0
}
Thanks!
As detailed in Geocoding Strategies, the basic architecture for a server-side geocoding application is the following:
A server-based application sends an address to the server's geocoding script.
The script checks the cache to see if the address has recently been geocoded.
If it has, the script returns that geocode to the original application.
If it has not, the script sends a geocoding request to Google. Once it has a result, it caches it, and then returns the geocode to the original application.
Sometime later, the geocode is used to display data on a Google Map.
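A minimal Python sketch of that flow (the in-memory dict stands in for a real cache such as a database or memcache, and the URL is the standard Geocoding web service endpoint):

import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
_cache = {}  # stand-in for a persistent cache (database, memcache, ...)

def geocode(address, api_key):
    # 1. Return a cached geocode if we already looked this address up
    if address in _cache:
        return _cache[address]
    # 2. Otherwise ask Google, cache the result, and return it
    resp = requests.get(GEOCODE_URL, params={"address": address, "key": api_key})
    location = resp.json()["results"][0]["geometry"]["location"]  # {"lat": ..., "lng": ...}
    _cache[address] = location
    return location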
Please also note the given quota considerations for things to be avoided when running client-side geocode requests at periodic intervals, and the caching considerations: the Google Maps API allows you to cache geocodes (i.e., store them on your server for a limited period). Caching can be useful if you have to repeatedly look up the same address.
I have written Python scripts to list repositories and commits.
To create a new repository, I have used the following code:
curl -F 'login=SolomonPeter26' -F 'token=mygithubapitoken' https://github.com/api/v2/json/repos/create -F 'name=REPONAME' -F 'description=This project is a test'
I can't access the GitHub API tokens of other users, so I couldn't write a Python script for that.
Please suggest a better way to create such a new repository, or a way to access the GitHub API token. (Can I get any help from OAuth or OAuth2?)
Yeah, you can't access the API tokens of other users; it's the same with Twitter. You need to use OAuth2, and each user should get their API keys/tokens and add them manually in the code. What you can do is provide an easy way for your users to add their GitHub API token.
I use Postman (a Chrome plug-in). It can test successfully:
For the access_token, you can use a personal access token from your account's settings.
There is a generic formula on Ritchie CLI to create a GitHub repository using the user's GitHub API token.
Note: the user will have to set the token manually the first time the command is executed in the terminal (so that it is saved locally).
Here are the code and the README file of this formula (in Python).
Here is an example of how it consumes the POST resource to create a repository with the Github API:
import requests

# `token` is the user's GitHub API token and `json_data` is the JSON payload
# built from the formula inputs (see the usage example below)
authorization = f'token {token}'
headers = {
    "Accept": "application/vnd.github.v3+json",
    "Authorization": authorization,
}
r = requests.post(
    url='https://api.github.com/user/repos',
    data=json_data,
    headers=headers
)
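For example, the json_data payload could be built like this (mirroring the values from the original curl command in the question):

import json

json_data = json.dumps({
    "name": "REPONAME",
    "description": "This project is a test"
})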
As you can customize the commands using the tool, it is possible to automate many other operations with the GitHub API in the same way.