I am creating an SQS queue and adding permissions to that queue for a specific user inline, but the add_permission call is failing in an unexpected way.
import boto3
queue_name = 'test-queue'
sqs_client = boto3.client('sqs')
response = sqs_client.create_queue(QueueName=queue_name)
queue = boto3.resource('sqs').Queue(response.get('QueueUrl'))
queue.add_permission(
    Label='TestReceivePermissions',
    AWSAccountIds=['arn:aws:iam::<account_id>:user/test_user'],
    Actions=['ReceiveMessage']
)
When I execute this code, I receive the following error:
botocore.exceptions.ClientError: An error occurred (InvalidParameterValue) when calling the AddPermission operation: Value [arn:aws:iam::<account_id>:user/test_user] for parameter PrincipalId is invalid. Reason: Unable to verify.
However, I am able to add this same permission to the queue via the AWS console. What am I missing here? Is there another approach I should consider?
UPDATE: One viable workaround is to create a managed policy and attach it to the user, like so:
import json, boto3
sqs = boto3.resource('sqs')
iam = boto3.resource('iam')
queue_name = 'test-queue'
USER_NAME = 'test_user'
queue = sqs.create_queue(QueueName=queue_name)
policy = iam.create_policy(
    PolicyName='{}-ReceiveAccess'.format(queue_name),
    PolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [
            {
                'Effect': 'Allow',
                'Action': [
                    'sqs:ReceiveMessage*'
                ],
                'Resource': [queue.attributes['QueueArn']]
            }
        ]
    })
)
policy.attach_user(UserName=USER_NAME)
This works for now, but I still do not understand why the other approach was not working.
I'm trying to send an SPL token transaction, but it's not working. It's a transaction on the devnet with a token that I newly created. My code is:
from spl.token.instructions import transfer, TransferParams
from spl.token.client import Client
from solana.publickey import PublicKey
from solana.transaction import Transaction
from solana.keypair import Keypair
address = "address sending from"
my_token = "3hALJzSz2bx8gxgrHg7EQQtdiHxG7d7LNswxVMXrUApw" # token address on devnet
private_key= "64 bit key"
def send_payouts_spl(dest, amount):
    source = address
    transfer_params = TransferParams(
        amount=amount,
        dest=PublicKey(dest),
        owner=PublicKey(source),
        program_id=PublicKey(my_token),
        source=PublicKey(source)
    )
    txn = Transaction()
    txn.add(transfer(transfer_params))
    solana_client = Client("https://api.devnet.solana.com")
    owner = Keypair.from_secret_key(private_key)
    tx_id = solana_client.send_transaction(txn, owner)
    return tx_id
And here is the error that I'm getting:
solana.rpc.core.RPCException: {'code': -32002, 'message': 'Transaction simulation failed: This program may not be used for executing instructions', 'data': {'accounts': None, 'err': 'InvalidProgramForExecution', 'logs': [], 'unitsConsumed': 0}}
Also, if it helps, my devnet token address and my devnet address are
3hALJzSz2bx8gxgrHg7EQQtdiHxG7d7LNswxVMXrUApw and EckcvMCmpkKwF4hDhWxq8cm4qy8JBkb2vBVQDu4WvxmM respectively.
In Solana, there's typically just one token program that is shared for all token types (mints).
When you provide the program_id, you shouldn't provide the address for your mint, but rather the id for the SPL Token Program, which is TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA. Also, the owner should be the key that owns the token account.
So instead, try:
owner = Keypair.from_secret_key(private_key)
transfer_params = TransferParams(
    amount=amount,
    dest=PublicKey(dest),
    owner=PublicKey(owner.public_key),
    program_id=PublicKey('TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA'),
    source=PublicKey(source)
)
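Putting the fix together, the full send function might look something like this (a sketch, assuming the same solana/spl-token libraries imported in the question; TOKEN_PROGRAM_ID comes from spl.token.constants and equals the ID above):

from spl.token.constants import TOKEN_PROGRAM_ID  # TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA

def send_payouts_spl(dest, amount):
    solana_client = Client("https://api.devnet.solana.com")
    owner = Keypair.from_secret_key(private_key)
    source = address  # in practice this should be the sender's token account for the mint
    transfer_params = TransferParams(
        amount=amount,
        dest=PublicKey(dest),
        owner=owner.public_key,        # the key that owns the source token account
        program_id=TOKEN_PROGRAM_ID,   # the shared SPL Token Program, not your mint
        source=PublicKey(source)
    )
    txn = Transaction()
    txn.add(transfer(transfer_params))
    return solana_client.send_transaction(txn, owner)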
I am currently using the Python GCP API to create a Cloud Tasks queue. My code is modified from the sample code; the logic is to check whether the queue exists and, if not, create a new queue and put a new task on it, so I use try-except and `from google.api_core import exceptions` to handle the error. But the problem right now is that it keeps saying my service account doesn't have the Cloud Tasks permission. Here is the error:
google.api_core.exceptions.PermissionDenied
google.api_core.exceptions.PermissionDenied: 403 The principal (user or service account) lacks IAM permission "cloudtasks.tasks.create" for the resource "projects/xxxx/locations/us-central1" (or the resource may not exist).
Here is my code:
@app.route('/train_model/<dataset_name>/<dataset_id>/', methods=["POST", "GET"])
def train_model(dataset_name, dataset_id):
    if request.method == 'POST':
        form = request.form
        model = form.get('model_name')
        date = form.get('date')
        datetime_object = datetime.strptime(date, '%Y-%m-%d %H:%M:%S')
        timezone = pytz.timezone('Asia/Hong_Kong')
        timezone_date_time_obj = timezone.localize(datetime_object)
        data = [dataset_id, model]
        payload = str(data).encode()
        # Create a client.
        url = "https://us-central1-xxx.cloudfunctions.net/create_csv"
        try:
            client = tasks_v2.CloudTasksClient.from_service_account_json(
                './xxxxx.json')
            url = "https://us-central1-xxxxxx.cloudfunctions.net/create_csv"
            location = 'us-central1'
            project = 'xxxxx'
            queue = 'testing1'
            parent = client.location_path(project, location)
            task = {
                "http_request": {
                    'http_method': 'POST',
                    'url': url,
                    'body': payload
                }}
            # set schedule time
            timestamp = timestamp_pb2.Timestamp()
            timestamp.FromDatetime(timezone_date_time_obj)
            task['schedule_time'] = timestamp
            response = client.create_task(parent, task)
        except exceptions.FailedPrecondition:
            location = 'us-central1'
            project = 397901391776
            # Create a client.
            client = tasks_v2.CloudTasksClient.from_service_account_json(
                "./xxxx.json")
            parent = client.location_path(project, location)
            queue = {"name": 'x'}
            queue.update(name="projects/xxxxx/locations/us-west2/queues/" + queue)  # the name of the queue from the try block
            response = client.create_queue(parent, queue)
            parent = client.queue_path(project, location, queue)
            task = {
                "http_request": {
                    'http_method': 'POST',
                    'url': url,
                    'body': payload
                }}
            # set schedule time
            timestamp = timestamp_pb2.Timestamp()
            timestamp.FromDatetime(timezone_date_time_obj)
            task['schedule_time'] = timestamp
            response = client.create_task(parent, task)
        print(response)
        return redirect('/datasetinfo/{}/{}/'.format(dataset_name, dataset_id))
The permissions of my service account:
I have reproduced your scenario and managed to get the same issue. The problem is not with authentication but that the resource doesn't exist.
In order to get the resource path, instead of using the function location_path you should use queue_path. This way, the variable parent will contain the queue's name and the call create_task will be able to find the resource.
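For illustration, a minimal sketch of that change, keeping the older client style used in the question (the key file, project, and queue names are placeholders):

client = tasks_v2.CloudTasksClient.from_service_account_json('./service-account.json')
# Build the full queue path (project + location + queue), not just the location path.
parent = client.queue_path('my-project', 'us-central1', 'testing1')
task = {
    "http_request": {
        'http_method': 'POST',
        'url': url,      # the Cloud Function URL from the question
        'body': payload
    }}
response = client.create_task(parent, task)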
Finally, giving the Editor role to a service account may be too much; you should restrict access to the minimum needed. If this code only needs to create tasks, create a custom role with just the required permission, cloudtasks.tasks.create in this case.
The code I've written seems to be what I need; however, it doesn't work and I get a 401 (authentication) error. I've tried everything:
1. Service account permissions
2. Creating a secret ID and key (not sure how to use those to get an access token, though)
3. Basically, everything I could think of for the past two days.
import requests
from google.oauth2 import service_account
METADATA_URL = 'http://metadata.google.internal/computeMetadata/v1/'
METADATA_HEADERS = {'Metadata-Flavor': 'Google'}
SERVICE_ACCOUNT = [NAME-OF-SERVICE-ACCOUNT-USED-WITH-CLOUD-FUNCTION-WHICH-HAS-COMPUTE-ADMIN-PRIVILEGES]
def get_access_token():
    url = '{}instance/service-accounts/{}/token'.format(
        METADATA_URL, SERVICE_ACCOUNT)
    # Request an access token from the metadata server.
    r = requests.get(url, headers=METADATA_HEADERS)
    r.raise_for_status()
    # Extract the access token from the response.
    access_token = r.json()['access_token']
    return access_token

def start_vms(request):
    request_json = request.get_json(silent=True)
    request_args = request.args
    if request_json and 'number_of_instances_to_create' in request_json:
        number_of_instances_to_create = request_json['number_of_instances_to_create']
    elif request_args and 'number_of_instances_to_create' in request_args:
        number_of_instances_to_create = request_args['number_of_instances_to_create']
    else:
        number_of_instances_to_create = 0
    access_token = get_access_token()
    address = "https://www.googleapis.com/compute/v1/projects/[MY-PROJECT]/zones/europe-west2-b/instances?sourceInstanceTemplate=https://www.googleapis.com/compute/v1/projects/[MY-PROJECT]/global/instanceTemplates/[MY-INSTANCE-TEMPLATE]"
    headers = {'token': '{}'.format(access_token)}
    for i in range(1, number_of_instances_to_create):
        data = {'name': 'my-instance-{}'.format(i)}
        r = requests.post(address, data=data, headers=headers)
        r.raise_for_status()
        print("my-instance-{} created".format(i))
Any advice/guidance? If someone could tell me how to get an access token using a secret ID and key, that would help. Also, I'm not too sure whether OAuth 2.0 will work, because I essentially want to turn these machines on, have them do some processing, and then self-destruct, so there is no user involvement to allow access. If OAuth 2.0 is the wrong way to go about it, what else can I use?
I tried using gcloud, but calling gcloud commands via subprocess isn't recommended.
I did something similar to this, though I used the Node 10 Firebase Functions runtime; it should be very similar nevertheless.
I agree that OAuth is not the correct solution since there is no user involved.
What you need to use is 'Application Default Credentials' which is based on the permissions available to your cloud functions' default service account which will be the one labelled as "App Engine default service account" here:
https://console.cloud.google.com/iam-admin/serviceaccounts?folder=&organizationId=&project=[YOUR_PROJECT_ID]
(For my project that service account already had the permissions necessary for starting and stopping GCE instances, but for other APIs I have had to grant it permissions manually.)
ADC is for server-to-server API calls. To use it, I called google.auth.getClient (from the Google APIs auth library) with just the scope, i.e. "https://www.googleapis.com/auth/cloud-platform".
This API is very versatile in that it returns whatever credentials you need, so when I am running on cloud functions it returns a 'Compute' object and when I'm running in the emulator it gives me a "UserRefreshClient" object.
I then include that auth object in my call to compute.instances.insert() and compute.instances.stop().
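For the Python runtime used in the question, a rough sketch of the same Application Default Credentials approach might look like this (assuming the google-auth and google-api-python-client packages; instance_config is a hypothetical dict shaped like the template shown below, and project/zone values are placeholders):

import google.auth
from googleapiclient import discovery

# Application Default Credentials: on Cloud Functions this picks up the
# function's service account automatically.
credentials, project_id = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
compute = discovery.build('compute', 'v1', credentials=credentials)

# 'instance_config' would be a dict shaped like the template below.
operation = compute.instances().insert(
    project=project_id,
    zone='europe-west2-b',
    body=instance_config,
).execute()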
Here is the template I used for testing my code:
{
  name: 'base',
  description: 'Temporary instance used for testing.',
  tags: { items: [ 'test' ] },
  machineType: `zones/${zone}/machineTypes/n1-standard-1`,
  disks: [
    {
      autoDelete: true, // you will want this!
      boot: true,
      type: 'PERSISTENT',
      initializeParams: {
        diskSizeGb: '10',
        sourceImage: "projects/ubuntu-os-cloud/global/images/ubuntu-minimal-1804-bionic-v20190628",
      }
    }
  ],
  networkInterfaces: [
    {
      network: `https://www.googleapis.com/compute/v1/projects/${projectId}/global/networks/default`,
      accessConfigs: [
        {
          name: 'External NAT',
          type: 'ONE_TO_ONE_NAT'
        }
      ]
    }
  ],
}
Hope that helps.
If you're getting a 401 error, that means the access token you're using is either expired or invalid.
This guide will be able to show you how to request OAuth 2.0 access tokens and make API calls using a Service Account: https://developers.google.com/identity/protocols/OAuth2ServiceAccount
The .json file mentioned is the private key you create in IAM & Admin under your service account.
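For illustration, a minimal sketch of that flow using the google-auth library already imported in the question (the key file path and scopes are placeholders); note that the token is sent in an Authorization: Bearer header:

import google.auth.transport.requests
from google.oauth2 import service_account

# Load the service account key (the .json file mentioned above) and request a token.
credentials = service_account.Credentials.from_service_account_file(
    'service-account-key.json',  # placeholder path
    scopes=['https://www.googleapis.com/auth/compute'])
credentials.refresh(google.auth.transport.requests.Request())

# Send the token as a Bearer token on API calls.
headers = {'Authorization': 'Bearer {}'.format(credentials.token)}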
I am not receiving any messages in my SQS queue when subscribing to an SNS topic via boto3.
Is this an issue with the code or the API credentials I am using? The IAM policy associated with this account has AWS PowerUser privileges, which should mean it has unrestricted access to manage SNS topics and SQS queues.
When I create the equivalent structure through the AWS console (create topic, create queue, subscribe queue to topic) and send a message using boto3, the AWS CLI, or the AWS console, the message comes through correctly.
I don't think it is an issue with the code, because the SubscriptionArn is being returned correctly.
I have tried this with both the US-EAST-1 and AP-SE-1 regions, with the same result.
Sample code:
#!/usr/bin/env python3
import boto3
import json
def get_sqs_msgs_from_sns():
    sqs_client = boto3.client('sqs', region_name='us-east-1')
    sqs_obj = boto3.resource('sqs', region_name='us-east-1')
    sns_client = boto3.client('sns', region_name='us-east-1')
    sqs_queue_name = 'queue1'
    topic_name = 'topic1'

    # Create/Get Queue
    sqs_client.create_queue(QueueName=sqs_queue_name)
    sqs_queue = sqs_obj.get_queue_by_name(QueueName=sqs_queue_name)
    queue_url = sqs_client.get_queue_url(QueueName=sqs_queue_name)['QueueUrl']
    sqs_queue_attrs = sqs_client.get_queue_attributes(QueueUrl=queue_url,
                                                      AttributeNames=['All'])['Attributes']
    sqs_queue_arn = sqs_queue_attrs['QueueArn']
    if ':sqs.' in sqs_queue_arn:
        sqs_queue_arn = sqs_queue_arn.replace(':sqs.', ':')

    # Create SNS Topic
    topic_res = sns_client.create_topic(Name=topic_name)
    sns_topic_arn = topic_res['TopicArn']

    # Subscribe SQS queue to SNS
    sns_client.subscribe(
        TopicArn=sns_topic_arn,
        Protocol='sqs',
        Endpoint=sqs_queue_arn
    )

    # Publish SNS Messages
    test_msg = {'default': {"x": "foo", "y": "bar"}}
    test_msg_body = json.dumps(test_msg)
    sns_client.publish(
        TopicArn=sns_topic_arn,
        Message=json.dumps({'default': test_msg_body}),
        MessageStructure='json')

    # Validate Message
    sqs_msgs = sqs_queue.receive_messages(
        AttributeNames=['All'],
        MessageAttributeNames=['All'],
        VisibilityTimeout=15,
        WaitTimeSeconds=20,
        MaxNumberOfMessages=5
    )
    assert len(sqs_msgs) == 1
    assert sqs_msgs[0].body == test_msg_body
    print(sqs_msgs[0].body)  # This should output a dict with keys Message, Type, Timestamp, etc., but only returns the test_msg

if __name__ == "__main__":
    get_sqs_msgs_from_sns()
I receive this output:
$ python .\sns-test.py
Traceback (most recent call last):
File ".\sns-test.py", line 55, in <module>
get_sqs_msgs_from_sns()
File ".\sns-test.py", line 50, in get_sqs_msgs_from_sns
assert len(sqs_msgs) == 1
AssertionError
The similar question posed for the C# AWS SDK (linked above) pointed me in the right direction: I needed to attach a policy to the SQS queue to allow the SNS topic to write to it.
def allow_sns_to_write_to_sqs(topicarn, queuearn):
    policy_document = """{{
  "Version":"2012-10-17",
  "Statement":[
    {{
      "Sid":"MyPolicy",
      "Effect":"Allow",
      "Principal" : {{"AWS" : "*"}},
      "Action":"SQS:SendMessage",
      "Resource": "{}",
      "Condition":{{
        "ArnEquals":{{
          "aws:SourceArn": "{}"
        }}
      }}
    }}
  ]
}}""".format(queuearn, topicarn)
    return policy_document
and
policy_json = allow_sns_to_write_to_sqs(topic_arn, queue_arn)
response = sqs_client.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={
        'Policy': policy_json
    }
)
print(response)
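For reference, this is how the snippet slots into the question's function, reusing the variable names already defined there (a sketch):

# Attach the policy before publishing, using the ARNs gathered earlier in the function.
policy_json = allow_sns_to_write_to_sqs(sns_topic_arn, sqs_queue_arn)
sqs_client.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={'Policy': policy_json}
)
# ...then subscribe, publish, and receive as before.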
I'm sending Apple push notifications via AWS SNS from Lambda, using Boto3 and Python.
from __future__ import print_function
import boto3
def lambda_handler(event, context):
    client = boto3.client('sns')
    for record in event['Records']:
        if record['eventName'] == 'INSERT':
            rec = record['dynamodb']['NewImage']
            competitors = rec['competitors']['L']
            for competitor in competitors:
                if competitor['M']['confirmed']['BOOL'] == False:
                    endpoints = competitor['M']['endpoints']['L']
                    for endpoint in endpoints:
                        print(endpoint['S'])
                        response = client.publish(
                            #TopicArn='string',
                            TargetArn=endpoint['S'],
                            Message='test message'
                            #Subject='string',
                            #MessageStructure='string',
                        )
Everything works fine! But when an endpoint is invalid for some reason (at the moment this happens every time I run a development build on my device, since I get a different endpoint then; it will be either not found or deactivated), the Lambda function fails and gets called all over again. In this particular case, if for example the second endpoint fails, it will send the push over and over again to endpoint 1, to infinity.
Is it possible to ignore invalid endpoints and just keep going with the function?
Thank you
Edit:
Thanks to your help I was able to solve it with:
try:
    response = client.publish(
        #TopicArn='string',
        TargetArn=endpoint['S'],
        Message='test message'
        #Subject='string',
        #MessageStructure='string',
    )
except Exception as e:
    print(e)
    continue
On failure, AWS Lambda retries the function until the event expires from the stream.
In your case, since the exception on the second endpoint is not handled, the retry mechanism re-executes the publish to the first endpoint.
If you handle the exception and ensure the function ends successfully even when there is a failure, the retries will not happen.
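A slightly narrower variant of the fix above catches only the SNS errors you expect for dead endpoints instead of a bare Exception (a sketch, assuming the exception classes boto3 models for SNS and the same client and loop variables as in the handler above):

for endpoint in endpoints:
    try:
        response = client.publish(
            TargetArn=endpoint['S'],
            Message='test message'
        )
    except (client.exceptions.EndpointDisabledException,
            client.exceptions.InvalidParameterException) as e:
        # Skip endpoints that have been disabled or no longer exist.
        print('Skipping endpoint {}: {}'.format(endpoint['S'], e))
        continue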