I'm having issues writing a unit test for an S3 client: the test seems to be using a real S3 client rather than the mocked one I have created for the test. Here is my example:
import os

import boto3
import pytest
from moto import mock_s3

@pytest.fixture(autouse=True)
def moto_boto(self):
    # setup: start moto and create the bucket
    mocks3 = mock_s3()
    mocks3.start()
    res = boto3.resource('s3')
    bucket_name: str = f"{os.environ['BUCKET_NAME']}"
    res.create_bucket(Bucket=bucket_name)
    yield
    # teardown: stop moto
    mocks3.stop()
def test_with_fixture(self):
    from functions.s3_upload_worker import (
        save_email_in_bucket,
    )
    client = boto3.client('s3')
    bucket_name: str = f"{os.environ['BUCKET_NAME']}"
    client.list_objects(Bucket=bucket_name)
    save_email_in_bucket(
        "123AZT",
        os.environ["BUCKET_FOLDER_NAME"],
        email_byte_code,
    )
This results in the following error:
botocore.exceptions.ClientError: An error occurred (ExpiredToken) when calling the PutObject operation: The provided token has expired.
The code I am testing looks like this:
def save_email_in_bucket(message_id, bucket_folder_name, body):
    s3_key = "".join([bucket_folder_name, "/", str(message_id), ".json"])
    # s3_client and bucket are module-level globals, created at import time
    s3_client.put_object(
        Bucket=bucket,
        Key=s3_key,
        Body=json.dumps(body),
        ContentType="application-json",
    )
    LOGGER.info(
        f"Saved email with message ID {message_id} in bucket folder {bucket_folder_name}"
    )
Not accepting this as an answer, but it may be useful for anyone who ends up here: I found a workaround where, if I create the S3 client inside the function I am trying to test rather than globally, this approach works. I would prefer to find an actual solution though.
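For reference, the root cause is that the module-level s3_client in functions.s3_upload_worker is created at import time, before moto starts, so it still carries real (expired) credentials. A minimal sketch of an actual fix, assuming the module path from the question (email_byte_code is the question's own test payload): patch that global with a client created while the mock is active.

import os
from unittest.mock import patch

import boto3

def test_with_fixture(self):
    import functions.s3_upload_worker as worker

    # Replace the module-level client, which was created at import time
    # (before moto started), with one created inside the mocked context.
    with patch.object(worker, "s3_client", boto3.client("s3")):
        worker.save_email_in_bucket(
            "123AZT",
            os.environ["BUCKET_FOLDER_NAME"],
            email_byte_code,
        )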
Related
I am trying to write some integration tests for a Lambda function that connects to an already created SQS queue, and I need to mock this SQS connection.
I am not sure whether mocking will work, since I use boto3's Lambda.invoke() along with SAM (sam local start-lambda) to invoke the Lambda function in a pytest function.
pytest sample code
import boto3
import botocore.client
from moto import mock_sqs

def connect_to_lambda_client(running_locally: bool = True):
    if running_locally:
        lambda_client = boto3.client(
            "lambda",
            region_name="us-west-2",
            endpoint_url="http://127.0.0.1:3001",
            use_ssl=False,
            verify=False,
            config=botocore.client.Config(
                signature_version=botocore.UNSIGNED,
                read_timeout=20,
                retries={"max_attempts": 0},
            ),
        )
    else:
        lambda_client = boto3.client("lambda")
    return lambda_client

@mock_sqs
def test_lambda_function():
    client = connect_to_lambda_client()
    lambda_response = client.invoke(FunctionName="ListPersonFunction")
    assert lambda_response.get("statusCode") == 200
sample lambda function
QUEUE_NAME = os.getenv('SQSLOGQUEUENAME')
client = boto3.resource('sqs')
queue = client.get_queue_by_name(QueueName=QUEUE_NAME)  # raises ClientError
queue.send_message(
    MessageBody=_record,
    MessageGroupId=self.group_id,
    MessageDeduplicationId=self.deduplication_id
)
Traceback of the client error:
[ERROR] ClientError: An error occurred (InvalidClientTokenId) when calling the GetQueueUrl operation: The security token included in the request is invalid.
    raise error_class(parsed_response, operation_name)  # raised in botocore's _make_api_call
Moto only mocks requests made within the Python context in which it is executed. It has no way of knowing what happens within SAM.
You could use MotoServer in this case. It offers the same functionality as Moto, but runs as a standalone server.
To start the server:
pip install moto[server,all]
moto_server -H 0.0.0.0 -p 5000
Creating/retrieving the queue can be done by providing the endpoint URL, exactly as you also connect to SAM:
sqs_client = boto3.client(
    "sqs",
    region_name="us-west-2",
    endpoint_url="http://127.0.0.1:5000",
)
Because this server runs separately from everything, you can connect to it from within your Python program (to create the queue) and from within SAM (to retrieve the queue).
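For example, a minimal sketch of the test-side setup under those assumptions (MotoServer on port 5000, and a FIFO queue since the Lambda sends a MessageGroupId; the queue name here is a hypothetical stand-in for SQSLOGQUEUENAME):

import boto3

# Create the queue on the standalone MotoServer before invoking SAM;
# the Lambda can then retrieve it through the same endpoint.
sqs_client = boto3.client(
    "sqs",
    region_name="us-west-2",
    endpoint_url="http://127.0.0.1:5000",
)
sqs_client.create_queue(
    QueueName="my-log-queue.fifo",
    Attributes={"FifoQueue": "true"},
)

Note that from within SAM's container, 127.0.0.1 refers to the container itself, so the Lambda may need the host's address to reach MotoServer.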
I'm working on an internal S3 service (not the AWS one). When I provide hard-coded credentials, region, and endpoint_url, boto3 seems to ignore them.
I came to that conclusion because it attempts to go out to the internet (using a public AWS endpoint URL instead of the internal one I provided), which fails with the proxy error below. It should not go out to the internet at all, since it is an internal S3 service:
botocore.exceptions.ProxyConnectionError: Failed to connect to proxy URL: "http://my_company_proxy"
Here is my code
import io
import os
import boto3
import pandas as pd
# Method 1 : Client #########################################
s3_client = boto3.client(
    's3',
    region_name='EU-WEST-1',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    endpoint_url='https://my_company_enpoint_url'
)
# ==> at this point no error, but I don't know the value of endpoint_url
# Read bucket
bucket = "bkt-udt-arch"
file_name = "banking.csv"
print("debug 1") # printed OK
obj = s3_client.get_object(Bucket=bucket, Key=file_name)
# program stops here with:
# botocore.exceptions.ProxyConnectionError: Failed to connect to proxy URL: "http://my_company_proxy"
print("debug 2")  # not printed
initial_df = pd.read_csv(obj['Body']) # 'Body' is a key word
print("debug 3")
# Method 2 : Resource #########################################
# use third party object storage
s3 = boto3.resource(
    's3',
    endpoint_url='https://my_company_enpoint_url',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    region_name='EU-WEST-1'
)
print("debug 4") # Printed OK if method 1 is commented
# Print out bucket names
for bucket in s3.buckets.all():
    print(bucket.name)
Thank you for the review
It was indeed a proxy problem: when the http_proxy env variable is disabled, it works fine.
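A minimal sketch of that fix, assuming the proxy is only configured through environment variables (clear them, or exempt the internal host in your proxy configuration, before creating the client):

import os

import boto3

# Remove the corporate proxy settings for this process so botocore
# connects to the internal endpoint directly.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    os.environ.pop(var, None)

s3_client = boto3.client(
    's3',
    region_name='EU-WEST-1',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    endpoint_url='https://my_company_enpoint_url'
)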
Somewhere in my code, a Lambda is called to return a true/false response. I am trying to mock this Lambda in my unit tests, with no success.
This is my code:
def _test_update_allowed():
    old = ...
    new = ...
    assert(is_update_allowed(old, new) == True)
Internally, is_update_allowed calls the Lambda, which is what I want to mock. I tried adding the following code above my test:
import zipfile
import io
import boto3
import os
import pytest

@pytest.fixture(scope='function')
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
    os.environ['AWS_SECURITY_TOKEN'] = 'testing'
    os.environ['AWS_SESSION_TOKEN'] = 'testing'

CLIENT = boto3.client('lambda', region_name='us-east-1')
# Expected response setup and zip file for lambda mock creation
def lambda_event():
    code = '''
def lambda_handler(event, context):
    return event
'''
    zip_output = io.BytesIO()
    zip_file = zipfile.ZipFile(zip_output, 'w', zipfile.ZIP_DEFLATED)
    zip_file.writestr('lambda_function.py', code)
    zip_file.close()
    zip_output.seek(0)
    return zip_output.read()
# create mocked lambda with zip file
def mock_some_lambda(lambda_name, return_event):
    return CLIENT.create_function(
        FunctionName=lambda_name,
        Runtime='python2.7',
        Role='arn:aws:iam::123456789:role/does-not-exist',
        Handler='lambda_function.lambda_handler',
        Code={
            'ZipFile': return_event,
        },
        Publish=True,
        Timeout=30,
        MemorySize=128
    )
and then updated my test to:
@mock_lambda
def _test_update_allowed():
    mock_some_lambda('hello-world-lambda', lambda_event())
    old = ...
    new = ...
    assert(is_update_allowed(old, new) == True)
But I'm getting the following error, which makes me think it's actually trying to talk to AWS:
botocore.exceptions.ClientError: An error occurred (UnrecognizedClientException) when calling the CreateFunction operation: The security token included in the request is invalid.
From the error message, I can confirm it is definitely not an AWS issue: it clearly states that the request is using credentials which are not valid, so that boils down to the code.
I am assuming you already have import statements for the necessary libs, because those are also not visible in the shared code:
import pytest
import moto
from mock import mock, patch
from moto import mock_lambda
So you need to use the aws_credentials fixture while creating the client, because from the shared code I don't see that you are doing so:
@pytest.fixture(scope='function')
def lambda_mock(aws_credentials):
    with mock_lambda():
        yield boto3.client('lambda', region_name='us-east-1')
and then your mock:
@pytest.fixture(scope='function')
def mock_some_lambda(lambda_mock):
    # lambda_name and return_event as defined in your test module
    lambda_mock.create_function(
        FunctionName=lambda_name,
        Runtime='python2.7',
        Role='arn:aws:iam::123456789:role/does-not-exist',
        Handler='lambda_function.lambda_handler',
        Code={
            'ZipFile': return_event,
        },
        Publish=True,
        Timeout=30,
        MemorySize=128
    )
    yield
and finally the test function:
def _test_update_allowed(lambda_mock, mock_some_lambda):
    lambda_mock.invoke(...)
    .....
I can't give a fully working example because I'm not sure what the full logic is. In the meantime, take a look at this post.
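As a reference for wiring the fixtures together, here is a minimal sketch (the function name is hypothetical) that only verifies the function was registered with the mock:

def test_lambda_is_mocked(lambda_mock, mock_some_lambda):
    # Both fixtures have run: credentials are fake, mock_lambda is
    # active, and the function has been created inside the mock.
    functions = lambda_mock.list_functions()['Functions']
    assert any(f['FunctionName'] == 'hello-world-lambda' for f in functions)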
The problem seems to be due to a nonexistent ARN role. Try mocking it as in the moto library tests:
def get_role_name():
    # _lambda_region is defined in moto's test suite, e.g. "us-west-2"
    with mock_iam():
        iam = boto3.client("iam", region_name=_lambda_region)
        try:
            return iam.get_role(RoleName="my-role")["Role"]["Arn"]
        except ClientError:
            return iam.create_role(
                RoleName="my-role",
                AssumeRolePolicyDocument="some policy",
                Path="/my-path/",
            )["Role"]["Arn"]
I have a method in my code that sends a message to an SQS queue, and I want to use moto to mock the AWS SQS service.
Below is my code:
def posttosqs(self, url, body):
    try:
        sqs_cli = boto3.client('sqs')
        sqs_cli.send_message(QueueUrl=url, MessageBody=body)
    except Exception as e:
        raise Exception("Posting failed to SQS")
Here is my test case:
@mock_sqs
@mock_s3
def test_case_use_moto(self):
    conn = boto3.resource('s3', region_name='us-east-1')
    conn.create_bucket(Bucket='Test')
    conn = boto3.client('sqs', region_name='us-east-1')
    queue = conn.create_queue(QueueName='Test')
    os.environ["SQS_URL"] = queue["QueueUrl"]
    conn.send_message(QueueUrl=queue["QueueUrl"], MessageBody="test")  # this works
    # SQS_URL = "https://queue.amazonaws.com/123456789012/Test"
    ctx = context_class_object()
    event = {"body": "test"}
    resp = lambda_module.handle_request(event, ctx)  # 'lambda' is reserved in Python; the module name here is a placeholder
    assert resp["statusCode"] == 200
conn.send_message works in the test case, but the method posttosqs fails with this error:
when calling the SendMessage operation: The specified queue does not exist for this wsdl version.
I was able to test the S3 operations successfully using the above method, but not the SQS operation.
Installing a specific moto version worked for me; the issue is with newer versions of the moto library:
pip install moto==2.2.2
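If pinning moto is not an option, one thing worth checking (an assumption based on the error, not something the traceback proves) is that the client created inside posttosqs resolves the same region as the test: newer moto versions generate region-qualified queue URLs, so a default-region mismatch can make the queue look nonexistent. A sketch:

def posttosqs(self, url, body):
    try:
        # Pin the region so it matches the queue created in the test.
        sqs_cli = boto3.client('sqs', region_name='us-east-1')
        sqs_cli.send_message(QueueUrl=url, MessageBody=body)
    except Exception as e:
        raise Exception("Posting failed to SQS") from e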
I have a Flask Python REST API which is called by another Flask REST API.
The input to my API is one parquet file (a FileStorage object) plus ECS connection and bucket details.
I want to save the parquet file to ECS in a specific folder using boto or boto3.
The code I have tried:
def uploadFileToGivenBucket(self, inputData, file):
    BucketName = inputData.ecsbucketname
    calling_format = OrdinaryCallingFormat()
    client = S3Connection(inputData.access_key_id, inputData.secret_key, port=inputData.ecsport,
                          host=inputData.ecsEndpoint, debug=2,
                          calling_format=calling_format)
    # client.upload_file(BucketName, inputData.filename, inputData.folderpath)
    bucket = client.get_bucket(BucketName, validate=False)
    key = boto.s3.key.Key(bucket, inputData.filename)
    fileName = NamedTemporaryFile(delete=False, suffix=".parquet")
    file.save(fileName)
    with open(fileName.name) as f:
        key.send_file(f)
but it is not working and gives me an error like this:
signature_host = '%s:%d' % (self.host, port)
TypeError: %d format: a number is required, not str
I tried Google with no luck. Can anyone help me with this or share any sample code for the same?
After a lot of trial and error, and a lot of time, I finally got the solution. I'm posting it for everyone else who is facing the same issue.
You need to use boto3, and here is the code:
import logging
from tempfile import NamedTemporaryFile

import boto3
from botocore.exceptions import ClientError

def uploadFileToGivenBucket(self, inputData, file):
    BucketName = inputData.ecsbucketname
    f = NamedTemporaryFile(delete=False, suffix=".parquet")
    file.save(f)
    endpointurl = "<your endpoint>"
    s3_client = boto3.client('s3', endpoint_url=endpointurl,
                             aws_access_key_id=inputData.access_key_id,
                             aws_secret_access_key=inputData.secret_key)
    try:
        newkey = 'yourfolderpath/anotherfolder/' + inputData.filename
        response = s3_client.upload_file(f.name, BucketName, newkey)
    except ClientError as e:
        logging.error(e)
        return False
    return True
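As a side note, upload_file goes through boto3's managed transfer layer, so large parquet files are uploaded in multipart chunks automatically; with put_object you would have to handle that yourself.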