I am trying to add ISP (in-skill purchasing) to my Alexa skill. The skill code is written in Python, and in the LaunchRequest handler I have written the following code:
from ask_sdk_model.services.monetization import InSkillProductsResponse

locale = handler_input.request_envelope.request.locale
monetization_service = handler_input.service_client_factory.get_monetization_service()
product_response = monetization_service.get_in_skill_products(locale)

if isinstance(product_response, InSkillProductsResponse):
    in_skill_product_list = product_response.in_skill_products
    self._logger.info(in_skill_product_list)
When I run my Lambda, though, I get the following error:
Attempting to use service client factory with no configured API client
Has anybody faced this issue? Let me know what I am doing incorrectly.
While initializing the skill builder I was using

sb = SkillBuilder()

This SkillBuilder does not have an API client configured. Changing it to

sb = StandardSkillBuilder()

works, because StandardSkillBuilder comes with an ApiClient configured.
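A minimal sketch of the change, assuming the usual ASK SDK for Python import paths (the handler registration line is just a placeholder for your existing handlers):

# Default builder - no API client, so service_client_factory is not available:
# from ask_sdk_core.skill_builder import SkillBuilder
# sb = SkillBuilder()

# Standard builder - ships with a DefaultApiClient preconfigured:
from ask_sdk.standard import StandardSkillBuilder

sb = StandardSkillBuilder()
sb.add_request_handler(LaunchRequestHandler())  # placeholder: your existing handlers

lambda_handler = sb.lambda_handler()

If you do not want the extra dependencies of the standard distribution, CustomSkillBuilder(api_client=DefaultApiClient()) from ask_sdk_core achieves the same thing.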
I'm currently struggling with Azure Function logs.
I'm trying to redirect all the default logs to App Insights, including metrics and so on.
Here is my architecture:
an App Insights resource, where local authentication is disabled
an Azure Function, whose identity has the "Monitoring Metrics Publisher" role on the App Insights resource
I found the following code in the Microsoft Azure documentation:
from azure.identity import ManagedIdentityCredential
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

credential = ManagedIdentityCredential()
tracer = Tracer(
    exporter=AzureExporter(
        credential=credential,
        connection_string="InstrumentationKey=<your-instrumentation-key>;IngestionEndpoint=<your-ingestion-endpoint>"
    ),
    sampler=ProbabilitySampler(1.0)
)
It lets me create a tracer, but it does not redirect any metrics.
The second snippet I found lets me add a log handler, but without the ManagedIdentityCredential.
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)

# TODO: replace the all-zero GUID with your instrumentation key.
logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000')
)
I tried to merge both snippets (roughly as in the sketch below). The function runs, but it doesn't send any metrics.
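For reference, the merged attempt looks roughly like this. This is only a sketch: it assumes the opencensus-ext-azure exporters (AzureLogHandler included) accept the same credential keyword for AAD authentication as AzureExporter does in recent releases, and it reuses the connection string placeholder from the documentation snippet above.

import logging

from azure.identity import ManagedIdentityCredential
from opencensus.ext.azure.log_exporter import AzureLogHandler

credential = ManagedIdentityCredential()

logger = logging.getLogger(__name__)
# Assumption: AzureLogHandler forwards the credential to the ingestion
# endpoint the same way AzureExporter does (opencensus-ext-azure >= 1.1).
logger.addHandler(AzureLogHandler(
    credential=credential,
    connection_string="InstrumentationKey=<your-instrumentation-key>;IngestionEndpoint=<your-ingestion-endpoint>",
))

logger.warning("Test log entry sent with AAD authentication")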
My goal is to get back all the Azure logs I was getting before disabling local authentication on the App Insights resource.
How can I achieve this?
Thank you very much for your help.
I'm using the Firebase Admin Python SDK to read/write data to Firestore. I've created a service account with the necessary permissions and saved the credentials .json file in the source code (I know this isn't the most secure, but I want to get the thing running before fixing security issues). When testing the integration locally, it works flawlessly. But after deploying to GCP, where our service is hosted, calls to Firestore don't work properly; they retry for a while before throwing 503 Deadline Exceeded errors. However, SSHing into a GKE pod and calling the SDK manually works without issues. It's only when the SDK is used in the normal code flow that problems occur.
Our service runs in Google Kubernetes Engine in one project (call it Project A), but the Firestore database is in another project (call it Project B). The service account that I'm trying to use is owned by Project B, so it should still be able to access the database even when it is being initialized from inside Project A.
Here's how I'm initializing the SDK:
from firebase_admin import get_app
from firebase_admin import initialize_app
from firebase_admin.credentials import Certificate
from firebase_admin.firestore import client
from google.api_core.exceptions import AlreadyExists

credentials = Certificate("/path/to/credentials.json")
try:
    app = initialize_app(credential=credentials, name="app_name")
except ValueError:
    app = get_app(name="app_name")
client = client(app=app)
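For context, the reads that end up retrying and eventually failing with 503 Deadline Exceeded are ordinary Firestore calls along these lines (the collection and document names here are made up for illustration):

# Hypothetical read of the kind that times out when run inside the GKE pod.
doc_ref = client.collection("sites").document("some-site-id")
snapshot = doc_ref.get()
if snapshot.exists:
    print(snapshot.to_dict())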
Another wrinkle is that another part of our code is able to successfully use the same service account to produce Firebase Access Tokens. The successful code is:
import firebase_admin
from firebase_admin import auth as firebase_admin_auth

if "app_name" in firebase_admin._apps:
    # Already initialized
    app = firebase_admin.get_app(name="app_name")
else:
    # Initialize
    credentials = firebase_admin.credentials.Certificate("/path/to/credentials.json")
    app = firebase_admin.initialize_app(credential=credentials, name="app_name")

firebase_token = firebase_admin_auth.create_custom_token(
    uid="id-of-user",
    developer_claims={"admin": is_admin, "site_slugs": read_write_site_slugs},
    app=app,
)
Any help appreciated.
It turns out the problem here was a conflict between gunicorn's gevent workers and the SDK's use of gRPC, something related to websockets. I found the solution here. I added the following code to our Django app's settings:
# In settings.py: make gRPC cooperate with gevent before any channels are created.
import grpc.experimental.gevent as grpc_gevent

grpc_gevent.init_gevent()
I am trying to authenticate with the Python SDK to pull Azure VNet data.
As a first step, to verify that I can authenticate, I am trying to use the subscription client to list subscriptions. I am creating a certificate credential to use for authentication.
When I make the call to list subscriptions from the subscription client, the call hangs seemingly indefinitely with no error returned. I am trying to authenticate to Azure Government (azure_gov). Here is the code:
import logging
import os

import boto3
from msrestazure.azure_cloud import AZURE_US_GOV_CLOUD as CLOUD
from azure.identity import CertificateCredential
from azure.mgmt.subscription import SubscriptionClient

# Set up logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logging.basicConfig(level=logging.INFO)

# Constants
CERT_PATH = '/tmp/cert.pem'
AZURE_CERT_PATH = '/tmp/cert.pem'
AZURE_TENANT_ID = os.environ['AZURE_TENANT_ID']
AZURE_CLIENT_ID = os.environ['AZURE_CLIENT_ID']
AZURE_SDK_S3_BUCKET = os.environ['AZURE_SDK_S3_BUCKET']

s3 = boto3.client('s3')
s3.download_file(AZURE_SDK_S3_BUCKET, 'certs/cert.pem', CERT_PATH)

# Set up Azure credentials
credential = CertificateCredential(
    tenant_id=AZURE_TENANT_ID,
    client_id=AZURE_CLIENT_ID,
    certificate_path=AZURE_CERT_PATH,
    authority=CLOUD.endpoints.active_directory)

logger.info(f'tenant_id = {AZURE_TENANT_ID}, client_id = {AZURE_CLIENT_ID}')
logger.info(f'CLOUD: {CLOUD}')

sub_client = SubscriptionClient(
    credential=credential,
    base_url=CLOUD.endpoints.resource_manager)

# Code times out here
subscription = next(sub_client.subscriptions.list())
logger.info(f'Fetched subscription {subscription.subscription_id}')
I have verified multiple times that the cert, tenant_id, and client_id all match what I see in Active Directory.
I've found the following posts from Microsoft: first post and second post. Both use the azure.mgmt.resource SubscriptionClient, which gives a no attribute 'signed_session' error on the CertificateCredential when trying to use a CertificateCredential to set up the client.
I also found the following adapter for using the CertificateCredential class with this client and tried it, but it gives me the same timeout issue on the next(sub_client.subscriptions.list()) call.
EDIT:
I am still seeing issues with this. When things completely time out after the maximum number of retries, I get the following error:
Attempted credentials:
EnvironmentCredential: Authentication failed: <urllib3.connection.HTTPSConnection object at 0x7fad94f116d8>: Failed to establish a new connection: [Errno 110] Connection timed out
I don't think it is an environment issue as I can log into the Azure CLI from the same instance.
I'm just starting out with boto3 and Lambda, and was trying to run the function below via PyCharm.
import boto3

client = boto3.client('rds')
response = client.stop_db_instance(
    DBInstanceIdentifier='dummy-mysql-rds'
)
But I receive the error below:
botocore.errorfactory.DBInstanceNotFoundFault: An error occurred (DBInstanceNotFound) when calling the StopDBInstance operation: DBInstance dummy-mysql-rds not found.
Do you know what may be causing this?
For the record, I have the AWS Toolkit installed for PyCharm, I can run simple functions to list and describe EC2 instances, and my AWS profile has admin access.
By explicitly defining the profile name, the function below now works via PyCharm. Thank you @OleksiiDonoha for your help in getting this resolved.
import boto3

# Point boto3 at the correct profile before creating the client.
boto3.setup_default_session(profile_name='dev')

client = boto3.client('rds')
response = client.stop_db_instance(
    DBInstanceIdentifier='dev-mysql-rds'
)
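An equivalent approach, if you prefer not to mutate boto3's default session, is to create an explicit Session. This is only a sketch, assuming the same 'dev' profile and instance identifier as above:

import boto3

# Explicit session bound to the 'dev' profile; nothing global is changed.
session = boto3.Session(profile_name='dev')
rds = session.client('rds')

response = rds.stop_db_instance(DBInstanceIdentifier='dev-mysql-rds')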
I'm trying to use Google Cloud's Text-to-Speech engine for my robot, and I cannot understand the reference page for passing the key explicitly in Python, as mentioned here.
I spent several hours yesterday exploring different options for setting the environment variable GOOGLE_APPLICATION_CREDENTIALS needed for implicit authorization, including an export command in the shell script I use to start the robot, setting os.environ in Python, and using os.system to call an export command.
client = texttospeech.TextToSpeechClient()

voice = robot_config.get('google_cloud', 'voice')
keyFile = robot_config.get('google_cloud', 'key_file')
hwNum = robot_config.getint('tts', 'hw_num')
languageCode = robot_config.get('google_cloud', 'language_code')

voice = texttospeech.types.VoiceSelectionParams(
    name=voice,
    language_code=languageCode
)
audio_config = texttospeech.types.AudioConfig(
    audio_encoding=texttospeech.enums.AudioEncoding.LINEAR16
)

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = keyFile
Logging in via SSH shows that I have successfully set the environment variable, since it shows up in env; however, a DefaultCredentialsError is thrown with the following message:
Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
Logging in and setting the environment variable manually will allow the script to run and work, but this is not a long-term solution.
This works for me:
import os

from google.cloud import texttospeech

# Set the key path before constructing the client, so the default
# credential lookup can find it.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/pi/projectx-17f8348743.json"

client = texttospeech.TextToSpeechClient()
The correct answer lies in the google.oauth2 library. The client is not looking for a path to a JSON key; it is looking for a service account credentials object.
from google.oauth2 import service_account
from google.cloud import texttospeech

# keyFile is the path to the service account JSON key from the question.
client = texttospeech.TextToSpeechClient(
    credentials=service_account.Credentials.from_service_account_file(keyFile)
)
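As a quick usage check, a sketch only: the call below uses the pre-2.0 texttospeech.types style from the question, and voice and audio_config are the objects built earlier there.

# Hypothetical smoke test: synthesize a short phrase with the explicit credentials.
synthesis_input = texttospeech.types.SynthesisInput(text="Robot online")
response = client.synthesize_speech(synthesis_input, voice, audio_config)

with open("output.wav", "wb") as out:
    out.write(response.audio_content)  # LINEAR16 audio bytes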