I'm trying to list the subscriptions in an Azure account using azure-python-sdk.
I have followed this link in the documentation:
https://learn.microsoft.com/en-us/python/api/azure-mgmt-subscription/azure.mgmt.subscription.operations.subscriptionsoperations?view=azure-python#list-custom-headers-none--raw-false----operation-config-
from azure.mgmt.subscription import SubscriptionClient
from msrestazure.azure_active_directory import UserPassCredentials
credentials = UserPassCredentials(username='xxxx', password='xxxx')
sub_client = SubscriptionClient(credentials)
subs = [sub.as_dict() for sub in sub_client.subscriptions.list()]
print(subs)
It is supposed to return a list of subscriptions.
However, I see only an empty list returned every time I run the code above.
Can anybody help?
Try this code:

# Assumed imports (not shown in the original answer): CLIError comes from
# knack, get_client_from_cli_profile from azure.common.
from knack.util import CLIError
from azure.common.client_factory import get_client_from_cli_profile
from azure.mgmt.subscription import SubscriptionClient

def list_subscriptions():
    try:
        sub_client = get_client_from_cli_profile(SubscriptionClient)
    except CLIError:
        # logger and _run_az_cli_login are defined elsewhere in the caller's code
        logger.info("Not logged in, running az login")
        _run_az_cli_login()
        sub_client = get_client_from_cli_profile(SubscriptionClient)
    return [["Subscription_name", "Subscription ID"]] + [
        [sub.display_name, sub.subscription_id]
        for sub in sub_client.subscriptions.list()
    ]
You can find the handy tool here.
If the list is empty and you get no exception, your credentials are likely correct (hence no exception), but your user doesn't have access to any subscriptions (no permissions).
In the Azure portal, the subscription panel has an "Access control (IAM)" button where you define which users are allowed access to a given subscription.
https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal
https://learn.microsoft.com/azure/role-based-access-control/rbac-and-directory-admin-roles
(I work at MS in the SDK team)
I think I solved the issue using the Azure CLI. Yet I still wonder why it didn't work as expected using azure-python-sdk.
Here is the code:
import subprocess
import json
subscriptions = json.loads(subprocess.check_output('az account list', shell=True).decode('utf-8'))
print(subscriptions)
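As a side note, the JSON that `az account list` prints can be trimmed down to just name/ID pairs. A minimal sketch with a hard-coded sample payload (the field names below match the CLI's output shape, but the values are made up):

```python
import json

# Hard-coded stand-in for `az account list` output (values are fake).
sample = '''[
  {"id": "00000000-0000-0000-0000-000000000001", "name": "Dev", "isDefault": true},
  {"id": "00000000-0000-0000-0000-000000000002", "name": "Prod", "isDefault": false}
]'''

accounts = json.loads(sample)
pairs = [(a["name"], a["id"]) for a in accounts]
print(pairs)
```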
Thank you for your responses.
I had a similar problem, so I used AzureCliCredential and it simply worked.
The code is this:

# Assumed imports (not shown in the original answer); SubscriptionClient
# here is the track-2 client from azure-mgmt-resource.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import SubscriptionClient

def subscription_list():
    credential = AzureCliCredential()
    subscription_client = SubscriptionClient(credential)
    sub_list = subscription_client.subscriptions.list()
    column_width = 40
    print("Subscription ID".ljust(column_width) + "Display name")
    print("-" * (column_width * 2))
    for group in list(sub_list):
        print(f'{group.subscription_id:<{column_width}}{group.display_name}')
Before trying this code, you have to log in to Azure from the command line in your dev environment (az login).
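The two-column layout used above can be factored into a small helper and checked offline with dummy rows (`format_rows` is an illustrative name, not part of the Azure SDK):

```python
def format_rows(rows, width=40):
    # Left-align the first column to a fixed width, mirroring the
    # f-string alignment used in the answer above.
    return [f"{first:<{width}}{second}" for first, second in rows]

# Dummy data standing in for real subscription IDs/names.
for line in format_rows([("Subscription ID", "Display name"),
                         ("0000-fake-id", "My Subscription")]):
    print(line)
```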
I am just starting with InfluxDB.
The UI has a section with a Python example to get started with the database: initialize the client, write data, make a request, and close the client. To authenticate the request you have to generate a token in the UI, which secures the request. But when I run the example code I get the error "can only concatenate str (not "NoneType") to str" from line 12 of my code.
The code:

from datetime import datetime
import os

from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

# You can generate an API token from the "API Tokens Tab" in the UI
token = os.getenv("INFLUX_TOKEN")
org = "XXXXXXX"
bucket = "XXXXXXX Bucket"

with InfluxDBClient(url="https://europe-west1-1.gcp.cloud2.influxdata.com", token=token, org=org) as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)

    point = Point("mem") \
        .tag("host", "host1") \
        .field("used_percent", 23.43234543) \
        .time(datetime.utcnow(), WritePrecision.NS)
    write_api.write(bucket, org, point)

    query = """from(bucket: "XXXXXXX Bucket") |> range(start: -1h)"""
    tables = client.query_api().query(query, org=org)
    for table in tables:
        for record in table.records:
            print(record)

client.close()
I understand that the problem comes from os.getenv("INFLUX_TOKEN"): it's supposed to return a string but actually returns a NoneType object, but I don't know why it doesn't work. With the token I got two things: the name of the token and its code, which I obtained when it was created.
I've tried:

os.getenv("the name of the token")
os.getenv("the code of the token")
the same as the two above, but with ' instead of "

Each time I get the same error, so please, if someone can help me!
You actually have a problem with your environment variable: if os.getenv("INFLUX_TOKEN") returns None, it means the INFLUX_TOKEN variable is not set. You can set it manually in your shell after you generate the token, by typing:

export INFLUX_TOKEN="your-influx-token"

You can find more about environment variables here.
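To fail fast instead of letting None leak into later string operations, one option is a small wrapper around os.getenv (a sketch; `require_env` is a made-up helper name):

```python
import os

def require_env(name):
    # Raise a clear error instead of returning None, which would later
    # surface as "can only concatenate str (not 'NoneType') to str".
    value = os.getenv(name)
    if value is None:
        raise RuntimeError(f"environment variable {name} is not set")
    return value

os.environ["INFLUX_TOKEN"] = "dummy-token"  # stand-in for a real token
token = require_env("INFLUX_TOKEN")
print(token)  # dummy-token
```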
I'm new to Python and KivyMD, and also to working with databases. I want to check whether the data provided by the user through the KivyMD app is already in the Firebase Realtime Database. These are the data in Firebase.
The Code

def send_data(self, email):
    from firebase import firebase
    firebase = firebase.FirebaseApplication("https://infinity-mode-default-rtdb.firebaseio.com/", None)
    data = {
        'Email': email
    }
    if email.split() == []:
        cancel_btn_checkpoint_dialogue = MDFlatButton(text='Retry', on_release=self.close_checkpoint_dialogue)
        self.checkpoint_dialog = MDDialog(title='Access Denied', text="Invalid Username",
                                          buttons=[cancel_btn_checkpoint_dialogue])
        self.checkpoint_dialog.open()
    else:
        firebase.post('Users', data)
If the user enters a value that already exists in the database, it should not be saved, and a dialog box should say the email is already in use. If the value is not in the database, it should be saved. Please help me do this.
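The check-before-save logic described here can be sketched against a plain dict standing in for the "Users" node (illustration only; in the real app the lookup has to query Firebase, as the answers show):

```python
def add_user_if_new(users, email):
    # users: {key: {"Email": ...}} -- a plain dict standing in for the
    # Firebase "Users" node (the real store is remote).
    if any(rec.get("Email") == email for rec in users.values()):
        return False  # caller should show the "already in use" dialog
    users[f"user{len(users)}"] = {"Email": email}
    return True

users = {"u0": {"Email": "a@b.com"}}
print(add_user_if_new(users, "a@b.com"))    # False: already present
print(add_user_if_new(users, "new@b.com"))  # True: inserted
```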
Did you try firebase_admin? I check my data like this:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore

cred = credentials.Certificate("firestore-cred.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

data = {
    "Email": email
}

query_email = db.collection(u'Users') \
    .where(u"Email", u"==", data["Email"]) \
    .get()

if query_email:
    # exists
    ...
else:
    # does not exist
    ...
If the user email does not exist, query_email will be an empty list. Don't forget that query_email is not JSON data; you can convert a result to a dict with to_dict():

email_query_result_as_json = query_email[0].to_dict()
I'm guessing you use python-firebase, which so far has very little documentation, so I'll do my best based on the package source.
You have to use the firebase.get method to retrieve a dictionary with the current data stored for a given user, then check whether that data already contains the information you are about to add (untested, since firebase doesn't even import on my machine):
record = firebase.get('/Users', 'id')
if not record.get('Email'):
    firebase.post('/Users/' + 'id', data)
Try using the dataSnapshot.exists() method or the snapshot.hasChild() method. I'm not 100% sure whether they work in Python, but it's worth a go.
I'm trying to create a storage account in Azure and upload a blob into it using their python SDK.
I managed to create an account like this:
client = get_client_from_auth_file(StorageManagementClient)
storage_account = client.storage_accounts.create(
    resourceGroup,
    name,
    StorageAccountCreateParameters(
        sku=Sku(name=SkuName.standard_ragrs),
        enable_https_traffic_only=True,
        kind=Kind.storage,
        location=region)).result()
The problem is that later I'm trying to build a container and I don't know what to pass as "account_url".
I have tried doing:
client = get_client_from_auth_file(BlobServiceClient, account_url=storage_account.primary_endpoints.blob)
return client.create_container(name)
But I'm getting:
azure.core.exceptions.ResourceNotFoundError: The specified resource does not exist
I did manage to create a container using:
client = get_client_from_auth_file(StorageManagementClient)
return client.blob_containers.create(
    resourceGroup,
    storage_account.name,
    name,
    BlobContainer(),
    public_access=PublicAccess.Container
)
But later, when I'm trying to upload a blob using BlobServiceClient or BlobClient, I still need the "account_url", so I'm still getting an error:
azure.core.exceptions.ResourceNotFoundError: The specified resource does not exist
Can anyone help me understand how to get the account_url for a storage account I created with the SDK?
EDIT:
I managed to find a workaround by creating the connection string from the storage keys.
storage_client = get_client_from_auth_file(StorageManagementClient)
storage_keys = storage_client.storage_accounts.list_keys(resource_group, account_name)
storage_key = next(v.value for v in storage_keys.keys)
return BlobServiceClient.from_connection_string(
    'DefaultEndpointsProtocol=https;' +
    f'AccountName={account_name};' +
    f'AccountKey={storage_key};' +
    'EndpointSuffix=core.windows.net')
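For reference, the connection string assembled above can be isolated into a helper and sanity-checked with dummy values (`build_connection_string` is a made-up name, not an SDK function):

```python
def build_connection_string(account_name, account_key):
    # Same shape as the string passed to from_connection_string above.
    return (
        "DefaultEndpointsProtocol=https;"
        f"AccountName={account_name};"
        f"AccountKey={account_key};"
        "EndpointSuffix=core.windows.net"
    )

print(build_connection_string("mystorage", "fake-key=="))
```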
This works, but I think George Chen's answer is more elegant.
I could reproduce this problem. I found that get_client_from_auth_file does not pass the credential to the BlobServiceClient; if you create a BlobServiceClient with only an account_url and no credential, it can still print the account name.
So if you want to use a credential to get a BlobServiceClient, you could use the code below, then do other operations.
credentials = ClientSecretCredential(
    'tenant_id',
    'application_id',
    'application_secret'
)

blobserviceclient = BlobServiceClient(account_url=storage_account.primary_endpoints.blob, credential=credentials)
If you don't want to do it this way, you can create the BlobServiceClient with the account key.
client = get_client_from_auth_file(StorageManagementClient, auth_path='auth')
storage_account = client.storage_accounts.create(
    'group name',
    'account name',
    StorageAccountCreateParameters(
        sku=Sku(name=SkuName.standard_ragrs),
        enable_https_traffic_only=True,
        kind=Kind.storage,
        location='eastus')).result()

storage_keys = client.storage_accounts.list_keys(resource_group_name='group name', account_name='account name')
storage_keys = {v.key_name: v.value for v in storage_keys.keys}

blobserviceclient = BlobServiceClient(account_url=storage_account.primary_endpoints.blob, credential=storage_keys['key1'])
blobserviceclient.create_container(name='container name')
I was creating an Office application that holds multiple user credentials and does tasks like emailing and adding calendar events. I chose O365. Everything worked great except that I could not save the credentials the way we pickle the creds with Google products:
with open(f'account_data/{account_name}.pickle', 'wb') as stream:
    pickle.dump(account, stream)
but I get this error:
AttributeError: Can't pickle local object 'OAuth2Session.__init__.<locals>.<lambda>'
I need to store multiple user keys and do some tasks. If you know any other module, tell me.
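The error itself is easy to reproduce without O365: any object holding a lambda that was defined inside a method can't be pickled (`Session` below is a made-up class mimicking OAuth2Session, for illustration only):

```python
import pickle

class Session:
    def __init__(self):
        # A lambda created inside __init__ is a "local object", which
        # the pickle module refuses to serialize.
        self.compliance_hook = lambda response: response

try:
    pickle.dumps(Session())
except (AttributeError, pickle.PicklingError) as exc:
    print("pickle failed:", exc)
```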
I figured it out myself.
from O365 import Account, MSGraphProtocol, message, FileSystemTokenBackend

def new_account(account_name):
    account = Account(credentials, scopes=scopes)
    token_backend = FileSystemTokenBackend(token_path='account_data', token_filename=f'{account_name}.txt')
    account.con.token_backend = token_backend
    account.authenticate()
    account.con.token_backend.save_token()

def load_account(account_name):
    account = Account(credentials, scopes=scopes)
    token_backend = FileSystemTokenBackend(token_path='account_data', token_filename=f'{account_name}.txt')
    account.con.token_backend = token_backend
    account.con.token_backend.load_token()
    if account.con.refresh_token():
        return account
Hi there, I'm new to Python.
I would like to implement a listener on my Firebase DB: when one or more parameters change in the DB, my Python code has to do something.
How can I do it?
Thanks a lot.
My DB is a simple list of data from 001 to 200:
"remote-controller"
001 -> 000
002 -> 020
003 -> 230
My code is:

from firebase import firebase

firebase = firebase.FirebaseApplication('https://remote-controller.firebaseio.com/', None)
result = firebase.get('003', None)
print(result)
It looks like this is supported now (October 2018): although it's not documented in the 'Retrieving Data' guide, you can find the needed functionality in the API reference. I tested it and it works like this:
def listener(event):
    print(event.event_type)  # can be 'put' or 'patch'
    print(event.path)        # relative to the reference, it seems
    print(event.data)        # new data at /reference/event.path. None if deleted

firebase_admin.db.reference('my/data/path').listen(listener)
As Peter Haddad suggested, you should use Pyrebase to achieve something like that, given that the Python SDK still does not support realtime event listeners.
import pyrebase

config = {
    "apiKey": "apiKey",
    "authDomain": "projectId.firebaseapp.com",
    "databaseURL": "https://databaseName.firebaseio.com",
    "storageBucket": "projectId.appspot.com"
}

firebase = pyrebase.initialize_app(config)
db = firebase.database()

def stream_handler(message):
    print(message["event"])  # put
    print(message["path"])   # /-K7yGTTEp7O549EzTYtI
    print(message["data"])   # {'title': 'Pyrebase', "body": "etc..."}

my_stream = db.child("posts").stream(stream_handler)
If anybody wants to create multiple listeners using the same listener function and get more info about the triggered node, you can do it like this.
A normal listener function receives an Event object that has only the data, node name, and event type. If you add multiple listeners and want to differentiate between the data changes, you can write your own class and attach some extra info to each object when creating it.
class ListenerClass:
    def __init__(self, appname):
        self.appname = appname

    def listener(self, event):
        print(event.event_type)  # can be 'put' or 'patch'
        print(event.path)        # relative to the reference, it seems
        print(event.data)        # new data at /reference/event.path. None if deleted
        print(self.appname)      # extra data related to the change; add your own member variables
Creating Objects:
listenerObject = ListenerClass(my_app_name + '1')
db.reference('PatientMonitoring', app=obj).listen(listenerObject.listener)

listenerObject = ListenerClass(my_app_name + '2')
db.reference('SomeOtherPath', app=obj).listen(listenerObject.listener)
Full Code:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

# Initialising the database with credentials
json_path = r'E:\Projectz\FYP\FreshOnes\Python\PastLocations\fyp-healthapp-project-firebase-adminsdk-40qfo-f8fc938674.json'
my_app_name = 'fyp-healthapp-project'
xyz = {'databaseURL': 'https://{}.firebaseio.com'.format(my_app_name), 'storageBucket': '{}.appspot.com'.format(my_app_name)}
cred = credentials.Certificate(json_path)
obj = firebase_admin.initialize_app(cred, xyz, name=my_app_name)

# Create objects here. You can use loops to create many listeners, but each
# listener spawns its own thread, so don't create irrelevant listeners; this
# won't work on a machine with thread constraints.
listenerObject = ListenerClass(my_app_name + '1')  # choose your own parameters for differentiating listeners
db.reference('PatientMonitoring', app=obj).listen(listenerObject.listener)

listenerObject = ListenerClass(my_app_name + '2')
db.reference('SomeOtherPath', app=obj).listen(listenerObject.listener)
As you can see on the per-language feature chart on the Firebase Admin SDK home page, Python and Go currently don't have realtime event listeners. If you need that on your backend, you'll have to use the node.js or Java SDKs.
You can use Pyrebase, which is a python wrapper for the Firebase API.
more info here:
https://github.com/thisbejim/Pyrebase
To retrieve data you need to use val(), example:
users = db.child("users").get()
print(users.val())
Python Firebase realtime listener, full code:

import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

def listener(event):
    print(event.event_type)  # can be 'put' or 'patch'
    print(event.path)        # relative to the reference, it seems
    print(event.data)        # new data at /reference/event.path. None if deleted

json_path = r'E:\Projectz\FYP\FreshOnes\Python\PastLocations\fyp-healthapp-project-firebase-adminsdk-40qfo-f8fc938674.json'
my_app_name = 'fyp-healthapp-project'
xyz = {'databaseURL': 'https://{}.firebaseio.com'.format(my_app_name), 'storageBucket': '{}.appspot.com'.format(my_app_name)}
cred = credentials.Certificate(json_path)
obj = firebase_admin.initialize_app(cred, xyz, name=my_app_name)

db.reference('PatientMonitoring', app=obj).listen(listener)
Output:

put
/
{'n0': '40', 'n1': '71'}  # the first event always delivers the current data at the path, whether it changed or not
put   # on data change
/n1
725
put   # on data change
/n0
401