I am trying to use the Pub/Sub service in my Python application. When I run the code, it gets stuck on the last publisher line for some reason and never ends. The subscriber seems fine. Does anyone know what is wrong with my code?
Publisher:
import os
from google.cloud import pubsub_v1
credentials_path = 'PATH/TO/THE/KEY.JSON'
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credentials_path
publisher = pubsub_v1.PublisherClient()
topic_path = 'projects/PROJECT_NAME/topics/TOPIC_NAME'
# simple garbage text to check if it's working
data = 'A garden sensor is ready!'
data = data.encode('utf-8')
attributes = {
    'sensorName': 'garden-001',
    'temperature': '75.0',
    'humidity': '60'
}
future = publisher.publish(topic_path, data, **attributes)
print(f'published message id {future.result()}') # here it is just waiting forever
Subscriber:
import os
from google.cloud import pubsub_v1
from concurrent.futures import TimeoutError
credentials_path = 'PATH/TO/THE/KEY.JSON'
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credentials_path
subscriber = pubsub_v1.SubscriberClient()
subscription_path = 'projects/PROJECT_NAME/subscriptions/SUBSCRIPTION_NAME'
def callback(message):
    print(f'Received message: {message}')
    print(f'data: {message.data}')
    if message.attributes:
        print("Attributes:")
        for key in message.attributes:
            value = message.attributes.get(key)
            print(f"{key}: {value}")
    message.ack()
streaming_pull_future = subscriber.subscribe(
    subscription_path, callback=callback)
print(f'Listening for messages on {subscription_path}')
# wrap subscriber in a 'with' block to automatically call close() when done
with subscriber:
    try:
        streaming_pull_future.result()
    except TimeoutError:
        streaming_pull_future.cancel()
        # block until the shutdown is complete
        streaming_pull_future.result()
Google provides decent documentation for its services, including Pub/Sub, with a basic Python example that would have helped you avoid this problem.
Aside: your publisher and subscriber snippets set GOOGLE_APPLICATION_CREDENTIALS statically within the code. Don't do this! Set the environment variable before running the code. This way, you can revise the value without changing the code but, more importantly, the value can be set by the runtime e.g. Compute Engine.
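For example, you can set the variable in the shell immediately before launching the scripts (the key path here is only a placeholder):
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
python3 publish.py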
Here's a working example based on your code using Application Default Credentials obtained from the environment:
Q="74535931"
BILLING="[YOUR-BILLING-ID]"
PROJECT="$(whoami)-$(date %y%m%d)-${Q}"
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
gcloud services enable pubsub.googleapis.com \
--project=${PROJECT}
ACCOUNT=tester
EMAIL=${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL}
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=roles/pubsub.editor
export GOOGLE_APPLICATION_CREDENTIALS=${PWD}/${ACCOUNT}.json
export PROJECT
export PUB="pub"
export SUB="sub"
gcloud pubsub topics create ${PUB} \
--project=${PROJECT}
gcloud pubsub subscriptions create ${SUB} \
--topic=${PUB} \
--project=${PROJECT}
publish.py:
import os
from google.cloud import pubsub_v1
project = os.getenv("PROJECT")
topic = os.getenv("PUB")
topic_path = f"projects/{project}/topics/{topic}"
data = 'A garden sensor is ready!'
data = data.encode('utf-8')
attributes = {
    'sensorName': 'garden-001',
    'temperature': '75.0',
    'humidity': '60'
}
publisher = pubsub_v1.PublisherClient()
future = publisher.publish(topic_path, data, **attributes)
print(f'published message id {future.result()}')
subscribe.py:
import os
from google.cloud import pubsub_v1
from concurrent.futures import TimeoutError
project = os.getenv("PROJECT")
subscription = os.getenv("SUB")
subscription_path = f"projects/{project}/subscriptions/{subscription}"
def callback(message):
    print(f'Received message: {message}')
    print(f'data: {message.data}')
    if message.attributes:
        print("Attributes:")
        for key in message.attributes:
            value = message.attributes.get(key)
            print(f"{key}: {value}")
    message.ack()
subscriber = pubsub_v1.SubscriberClient()
streaming_pull_future = subscriber.subscribe(
    subscription_path, callback=callback)
print(f'Listening for messages on {subscription_path}')
with subscriber:
    try:
        streaming_pull_future.result()
    except TimeoutError:
        streaming_pull_future.cancel()
        # block until the shutdown is complete
        streaming_pull_future.result()
Run python3 subscribe.py:
python3 subscribe.py
Listening for messages on projects/{project}/subscriptions/{sub}
Received message: Message {
data: b'A garden sensor is ready!'
ordering_key: ''
attributes: {
"humidity": "60",
"sensorName": "garden-001",
"temperature": "75.0"
}
}
data: b'A garden sensor is ready!'
Attributes:
humidity: 60
temperature: 75.0
sensorName: garden-001
And in a separate window python3 publish.py:
python3 publish.py
published message id 1234567890123456
Related
I'm attempting to write a GCP Cloud Function in Python that calls the API for creating an IoT device. The initial challenge seems to be getting the appropriate module (specifically iot_v1) loaded within Cloud Functions so that it can make the call.
Example Python code from Google is located at https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/iot/api-client/manager/manager.py. The specific call desired is shown in "create_es_device". Trying to repurpose that into a Cloud Function (code below) errors out with "ImportError: cannot import name 'iot_v1' from 'google.cloud' (unknown location)"
Any thoughts?
import base64
import logging
import json
import datetime
from google.auth import compute_engine
from apiclient import discovery
from google.cloud import iot_v1
def handle_notification(event, context):
    # Triggered from a message on a Cloud Pub/Sub topic.
    # Args:
    #   event (dict): Event payload.
    #   context (google.cloud.functions.Context): Metadata for the event.
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    logging.info('New device registration info: {}'.format(pubsub_message))
    certData = json.loads(pubsub_message)['certs']
    deviceID = certData['device-id']
    certKey = certData['certificate']
    projectID = certData['project-id']
    cloudRegion = certData['cloud-region']
    registryID = certData['registry-id']
    newDevice = create_device(projectID, cloudRegion, registryID, deviceID, certKey)
    logging.info('New device: {}'.format(newDevice))

def create_device(project_id, cloud_region, registry_id, device_id, public_key):
    # from https://cloud.google.com/iot/docs/how-tos/devices#api_1
    client = iot_v1.DeviceManagerClient()
    parent = client.registry_path(project_id, cloud_region, registry_id)
    # Note: You can have multiple credentials associated with a device.
    device_template = {
        #'id': device_id,
        'id': 'testing_device',
        'credentials': [{
            'public_key': {
                'format': 'ES256_PEM',
                'key': public_key
            }
        }]
    }
    return client.create_device(parent, device_template)
You need to have the google-cloud-iot project listed in your requirements.txt file.
See https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/iot/api-client/manager/requirements.txt
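For example, a minimal requirements.txt for this function might contain just the line below (no version pin shown; add one if you need reproducible builds):
google-cloud-iot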
With the Python google-cloud-pubsub library, acknowledging messages through subscriber.acknowledge() does not acknowledge my messages. My ack deadline is set to 30 seconds.
Here is my code:
from google.cloud import pubsub_v1
project_id = "$$$$"
subscription_name = "$$$$"
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_name)
response = subscriber.pull(subscription_path, max_messages=10, timeout=15)
for msg in response.received_messages:
    subscriber.acknowledge(subscription=subscription_path, ack_ids=[msg.ack_id])
Using google-cloud-pubsub==1.0.2
Any idea of what I'm doing wrong?
What I recommend is referring to the Synchronous Pull documentation, then running the sample Python code to pull and acknowledge messages:
from google.cloud import pubsub_v1
project_id = "Your Google Cloud Project ID"
subscription_name = "Your Pub/Sub subscription name"  # TODO: replace with your subscription name
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(
    project_id, subscription_name)
NUM_MESSAGES = 3
response = subscriber.pull(subscription_path, max_messages=NUM_MESSAGES)
ack_ids = []
for received_message in response.received_messages:
    print("Received: {}".format(received_message.message.data))
    ack_ids.append(received_message.ack_id)
subscriber.acknowledge(subscription_path, ack_ids)
print('Received and acknowledged {} messages. Done.'.format(
    len(response.received_messages)))
I can't find the definition of ack_ids = [] in your code (you need to define it before you start using it). If you see positive results when running that piece of code, you can assume there is a bug in your code. Have you provided the full code?
I am using the following azurerm function in my code:
public_ips = azurerm.get_vmss_public_ips(access_token, SUBSCRIPTION_ID,
GROUP_NAME, CUScaleSet)
print(public_ips)
I am getting the following output:
{u'error': {u'message': u"No registered resource provider found for
location 'eastus' and API version '2019-03-01' for type
'virtualMachineScaleSets/publicIPAddresses'. The supported
api-versions are '2017-03-30, 2017-12-01, 2018-04-01, 2018-06-01,
2018-10-01'. The supported locations are 'eastus, eastus2, westus,
centralus, northcentralus, southcentralus, northeurope, westeurope,
eastasia, southeastasia, japaneast, japanwest, australiaeast,
australiasoutheast, australiacentral, brazilsouth, southindia,
centralindia, westindia, canadacentral, canadaeast, westus2,
westcentralus, uksouth, ukwest, koreacentral, koreasouth,
francecentral, southafricanorth, uaenorth'.", u'code':
u'NoRegisteredProviderFound'}}
NOTE: The same piece of code was running a few days ago.
If the requirement is to fetch all the IPs of the VMs in the VMSS instance, you can use the official Azure SDK for Python as follows:
# Imports
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.network import NetworkManagementClient

# Set subscription ID
SUBSCRIPTION_ID = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

def get_credentials():
    credentials = ServicePrincipalCredentials(
        client_id='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx',
        secret='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx',
        tenant='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
    )
    return credentials

# Get credentials
credentials = get_credentials()

# Initialize management client
network_client = NetworkManagementClient(
    credentials,
    SUBSCRIPTION_ID
)

def get_vmss_vm_ips():
    # List all network interfaces of the VMSS instance
    vmss_nics = network_client.network_interfaces.list_virtual_machine_scale_set_network_interfaces(
        "<VMSS Resource group name>", "<VMSS instance name>")
    niclist = [nic.serialize() for nic in vmss_nics]
    print("IP addresses in the given VM Scale Set:")
    for nic in niclist:
        ipconf = nic['properties']['ipConfigurations']
        for ip in ipconf:
            print(ip['properties']['privateIPAddress'])

# Get all IPs of VMs in VMSS
get_vmss_vm_ips()
References:
NetworkInterfacesOperations class
list_virtual_machine_scale_set_network_interfaces() method
Hope this helps!
I am trying to get messages from a topic on Message Hub on Bluemix using the Confluent Kafka Python client. My code is below, but something is not working. The topic and the Message Hub instance are up and running, so the problem is probably in the code.
import sys

from confluent_kafka import Producer, KafkaError, Consumer
consumer_settings = {
    'bootstrap.servers': 'broker-url-here',
    'group.id': 'mygroup',
    'default.topic.config': {'auto.offset.reset': 'smallest'},
    'sasl.mechanisms': 'PLAIN',
    'security.protocol': 'ssl',
    'sasl.username': 'username-here',
    'sasl.password': 'password-here',
}
c = Consumer(**consumer_settings)
c.subscribe(['topic-here'])
running = True
while running:
    msg = c.poll()
    if msg.error():
        print("Error while retrieving message")
        c.close()
        sys.exit(10)
    elif (msg is not None):
        for x in msg:
            print(x)
    else:
        sys.exit(10)
When I run the code, it seems to get stuck at msg = c.poll(). So I guess it is either failing to connect, or failing to retrieve messages. The credentials themselves are correct.
The consume logic looks fine, but the consumer configuration is incorrect; two settings need to change (a corrected sketch follows the list below):
security.protocol needs to be set to sasl_ssl
ssl.ca.location needs to point to a PEM file containing trusted certificates. The location of that file varies for each OS, but for the most common it's:
Bluemix/Ubuntu: /etc/ssl/certs
Red Hat: /etc/pki/tls/cert.pem
macOS: /etc/ssl/certs.pem
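Putting those two changes together, here is a sketch of the corrected settings based on the question's code (the broker URL and credentials remain placeholders, and the CA path assumes Bluemix/Ubuntu; adjust per the list above):
from confluent_kafka import Consumer

consumer_settings = {
    'bootstrap.servers': 'broker-url-here',
    'group.id': 'mygroup',
    'default.topic.config': {'auto.offset.reset': 'smallest'},
    'sasl.mechanisms': 'PLAIN',
    'security.protocol': 'sasl_ssl',      # was 'ssl'
    'ssl.ca.location': '/etc/ssl/certs',  # trusted CA certificates (Bluemix/Ubuntu path)
    'sasl.username': 'username-here',
    'sasl.password': 'password-here',
}
c = Consumer(**consumer_settings)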
We also have a sample app using this client that can easily be started or deployed to Bluemix: https://github.com/ibm-messaging/message-hub-samples/tree/master/kafka-python-console-sample
I am not receiving any messages in my SQS queue when subscribing to an SNS topic via boto3.
Is this an issue with the code or the API credentials I am using? The IAM policy associated with this account has AWS PowerUser privileges, which should mean it has unrestricted access to manage SNS topics and SQS queues.
When I create the equivalent structure through the AWS console (create topic, create queue, subscribe queue to topic) and send a message using either boto3, the AWS CLI, or the AWS console, the message comes through correctly.
I don't think it is an issue with the code because the SubscriptionArn is being returned correctly?
I have tried this with both the US-EAST-1 and AP-SE-1 regions, same result.
Sample code:
#!/usr/bin/env python3
import boto3
import json
def get_sqs_msgs_from_sns():
    sqs_client = boto3.client('sqs', region_name='us-east-1')
    sqs_obj = boto3.resource('sqs', region_name='us-east-1')
    sns_client = boto3.client('sns', region_name='us-east-1')
    sqs_queue_name = 'queue1'
    topic_name = 'topic1'
    # Create/Get Queue
    sqs_client.create_queue(QueueName=sqs_queue_name)
    sqs_queue = sqs_obj.get_queue_by_name(QueueName=sqs_queue_name)
    queue_url = sqs_client.get_queue_url(QueueName=sqs_queue_name)['QueueUrl']
    sqs_queue_attrs = sqs_client.get_queue_attributes(QueueUrl=queue_url,
                                                      AttributeNames=['All'])['Attributes']
    sqs_queue_arn = sqs_queue_attrs['QueueArn']
    if ':sqs.' in sqs_queue_arn:
        sqs_queue_arn = sqs_queue_arn.replace(':sqs.', ':')
    # Create SNS Topic
    topic_res = sns_client.create_topic(Name=topic_name)
    sns_topic_arn = topic_res['TopicArn']
    # Subscribe SQS queue to SNS
    sns_client.subscribe(
        TopicArn=sns_topic_arn,
        Protocol='sqs',
        Endpoint=sqs_queue_arn
    )
    # Publish SNS Messages
    test_msg = {'default': {"x":"foo","y":"bar"}}
    test_msg_body = json.dumps(test_msg)
    sns_client.publish(
        TopicArn=sns_topic_arn,
        Message=json.dumps({'default': test_msg_body}),
        MessageStructure='json')
    # Validate Message
    sqs_msgs = sqs_queue.receive_messages(
        AttributeNames=['All'],
        MessageAttributeNames=['All'],
        VisibilityTimeout=15,
        WaitTimeSeconds=20,
        MaxNumberOfMessages=5
    )
    assert len(sqs_msgs) == 1
    assert sqs_msgs[0].body == test_msg_body
    print(sqs_msgs[0].body)  # This should output a dict with keys Message, Type, Timestamp, etc., but only returns the test_msg
if __name__ == "__main__":
    get_sqs_msgs_from_sns()
I receive this output:
$ python .\sns-test.py
Traceback (most recent call last):
File ".\sns-test.py", line 55, in <module>
get_sqs_msgs_from_sns()
File ".\sns-test.py", line 50, in get_sqs_msgs_from_sns
assert len(sqs_msgs) == 1
AssertionError
A similar question posed for the C# AWS SDK pointed me in the right direction: I needed to attach a policy to the SQS queue that allows the SNS topic to write to it.
def allow_sns_to_write_to_sqs(topicarn, queuearn):
    policy_document = """{{
      "Version":"2012-10-17",
      "Statement":[
        {{
          "Sid":"MyPolicy",
          "Effect":"Allow",
          "Principal" : {{"AWS" : "*"}},
          "Action":"SQS:SendMessage",
          "Resource": "{}",
          "Condition":{{
            "ArnEquals":{{
              "aws:SourceArn": "{}"
            }}
          }}
        }}
      ]
    }}""".format(queuearn, topicarn)
    return policy_document
and
policy_json = allow_sns_to_write_to_sqs(topic_arn, queue_arn)
response = sqs_client.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={
        'Policy': policy_json
    }
)
print(response)