I am trying to send a CSV file to an SFTP server using a Google Cloud Function.
This is the Python script I am using -
import paramiko
import os
def hello_sftp(event, context):
    myPassword = os.environ.get('SFTP_PASSWORD')
    host = "HostName"
    username = "TestUser"
    password = myPassword
    file_name = 'test.csv''
    port = 22
    transport = paramiko.Transport((host, port))
    destination_path = "/" + file_name
    local_path = "gs://testbucket/" + file_name  # GCP bucket address
    transport.connect(username=username, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.put(local_path, destination_path)
    sftp.close()
    transport.close()
To authenticate to the SFTP server I need to use an RSA key file. I have uploaded the secret value to Secret Manager and used that value as an environment variable in the Google Cloud Function. But I think I am doing something wrong here -
myPassword = os.environ.get('SFTP_PASSWORD')
because of this line, I think, I am getting this deployment error message -
Deployment failure:
Function failed on loading user code. This is likely due to a bug in the user code.
Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation.
In the logs I can find the following error message -
2022-01-23T23:31:59.838337ZCloud FunctionsCreateFunctioneurope-west3:function-SFTPxx#xx.com {#type: type.googleapis.com/google.cloud.audit.AuditLog, authenticationInfo: {…}, authorizationInfo: […], methodName: google.cloud.functions.v1.CloudFunctionsService.CreateFunction, request: {…}, requestMetadata: {…}, resourceLocation: {…}, resourceName: projects/testServer-test/locations/eur…
Can anyone point out where I am going wrong here, or whether the script itself is wrong?
From your error message it looks like the function fails to deploy altogether after trying to parse your code, so you don't even get to the other part (the function code itself).
Check your code for spelling mistakes, spacing and formatting, etc.; make sure it runs outside of a function, and then when deploying you will need to wrap it in a function and set whatever triggers it.
Don't forget a requirements.txt specifying the libraries you import in your code so they are installed at deployment (you can also pin their versions in there).
After looking at your sample code: for starters, remove the extra ' at the end of the file_name = 'test.csv'' line, so it becomes file_name = 'test.csv'.
Then, to check the rest of your code's behavior, put print statements before variables are used downstream; those print statements will show up in your logs after you trigger your deployed function, and you can see what the variables looked like at invocation.
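For instance, here is a minimal sketch of the fixed function with debug prints (keeping the names from the question; note that sftp.put() expects a local filesystem path, so the file would first need to exist somewhere like /tmp rather than at a gs:// URL):

import os
import paramiko

def hello_sftp(event, context):
    password = os.environ.get('SFTP_PASSWORD')
    print('SFTP_PASSWORD set:', password is not None)  # shows up in the function logs
    file_name = 'test.csv'  # extra quote removed
    print('uploading', file_name)
    transport = paramiko.Transport(("HostName", 22))
    transport.connect(username="TestUser", password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.put('/tmp/' + file_name, '/' + file_name)  # local path, not a gs:// URL
    sftp.close()
    transport.close()

along with a requirements.txt next to it containing just the line paramiko.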
I am trying to access some mailboxes using the Pop3 protocol through my works proxy. I can get my oauth token, and access my information using Chilkat2 HTTP Library.
Using this code gets me a response of my profile:
http = chilkat2.Http()
http.AuthToken = accesstoken
http.ProxyDomain = "Some Proxy"
http.ProxyPort = Some Port
answer = http.QuickGetStr("https://graph.microsoft.com/v1.0/me")
print(answer)
which returns this
{"@odata.context":"https://graph.microsoft.com/v1.0/$metadata#users/$entity" ... etc.
This returns my profile details from the Windows servers so I know my oauth2 token is valid and works.
I then try to use the MailMan class to open up my POP3 mailbox, but I run into an authentication error. I run the code below:
mailman = chilkat2.MailMan()
mailman.HttpProxyHostname = "Some Proxy"
mailman.HttpProxyPort = Some Port
mailman.MailHost = "outlook.office365.com"
mailman.MailPort = 995
mailman.PopSsl = True
mailman.PopUsername = username
mailman.PopPassword = ""
mailman.OAuth2AccessToken = accesstoken

mailman.Pop3EndSession()  # close session, as the program keeps breaking and leaving a session open

success = mailman.Pop3Connect()
if (success != True):
    print(mailman.LastErrorText)
    sys.exit()

# Authenticate..
success = mailman.Pop3Authenticate()
if (success != True):
    print(mailman.LastErrorText)
    sys.exit()
However, the authenticate command always returns false; the Chilkat error log is shown below:
ChilkatLog:
Pop3Authenticate:
DllDate: Feb 9 2021
ChilkatVersion: 9.5.0.86
UnlockPrefix: Auto unlock for 30-day trial
Architecture: Little Endian; 64-bit
Language: Python 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)], win32
VerboseLogging: 0
Pop3Authenticate:
username: my username
popSPA: 0
greeting: +OK The Microsoft Exchange POP3 service is ready. [TABPADIAUAAyADYANQBDAEEAMAAzADgANgAuAEcAQgBSAFAAMgA2ADUALgBQAFIATwBEAC4ATwBVAFQATABPAE8ASwAuAEMATwBNAA==]
pop_office365_xoauth2:
PopCmdSent: AUTH XOAUTH2
PopCmdResp: +
auth_xoauth2_response_1: +
PopCmdSent: <base64 string in XOAUTH2 format>
PopCmdResp: -ERR Authentication failure: unknown user name or bad password.
POP3 response indicates failure.
AUTH_XOAUTH2_response: -ERR Authentication failure: unknown user name or bad password.
--pop_office365_xoauth2
POP3 authentication failed
--Pop3Authenticate
Failed.
--Pop3Authenticate
--ChilkatLog
I am at a loss. I have used every combination of scopes that should allow me this access with my token, but I cannot get it to work with any of the email libraries in Chilkat; it seems to connect fine with the servers but always fails the authentication. Anyone got any idea on this?
My guess is that something wasn't setup correctly in Azure, and/or you're not asking for the correct scopes when getting the OAuth2 access token.
This example has comments that describe the App Registration in Azure Active Directory: https://www.example-code.com/chilkat2-python/office365_oauth2_access_token.asp
At each step, there is a link to the Azure Console screen as I was doing it.
Also, your scope should look like this:
oauth2.Scope = "openid profile offline_access https://outlook.office365.com/SMTP.Send https://outlook.office365.com/POP.AccessAsUser.All https://outlook.office365.com/IMAP.AccessAsUser.All"
There are places in the Microsoft documentation where the scopes are different (or old?).
The interactive 3-legged OAuth2 flow only has to be done once. After you have the OAuth2 access token, you can continually refresh without user interaction
as shown here: https://www.example-code.com/chilkat2-python/office365_refresh_access_token.asp
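As a rough sketch of that refresh step (the chilkat2 OAuth2 property names follow the linked examples; the endpoint and credential values here are placeholders to verify against them):

import chilkat2

oauth2 = chilkat2.OAuth2()
oauth2.TokenEndpoint = "https://login.microsoftonline.com/common/oauth2/v2.0/token"  # assumed
oauth2.ClientId = "app-client-id"          # placeholder
oauth2.ClientSecret = "app-client-secret"  # placeholder
oauth2.RefreshToken = saved_refresh_token  # persisted from the initial interactive flow

success = oauth2.RefreshAccessToken()
if (success != True):
    print(oauth2.LastErrorText)
else:
    accesstoken = oauth2.AccessToken  # hand this to mailman.OAuth2AccessToken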
Alternatively, you can try the resource owner grant flow as shown here:
https://www.example-code.com/chilkat2-python/office365_resource_owner_password_credentials_grant.asp
The resource-owner flow is non-interactive and is for apps accessing its own O365 account.
Please let me know if that helps.
So I'm using the sample code provided by Twilio to send an SMS to my phone, but executing it keeps giving me the same error and I'm not sure what it means. I have already searched through this website to see if anyone has had any similar errors, and I also followed the Twilio step-by-step process for setting up and receiving the SMS, so I am very confused as to why this is happening. None of the resources I used gave me an answer. My phone number is starred out for safety reasons.
# Download the helper library from https://www.twilio.com/docs/python/install
import os
from twilio.rest import Client
# Your Account Sid and Auth Token from twilio.com/console
# and set the environment variables. See http://twil.io/secure
account_sid = os.environ['ACa30b8fa59e7eba0e6dc5dbedaf5e8fcb']
auth_token = os.environ['REDACTED']
client = Client(account_sid, auth_token)
message = client.messages \
    .create(
        body="What is up Med",
        from_='+***********',
        to='+***********'
    )
print(message.sid)
Error below:
Traceback (most recent call last):
  File "/Users/administrator/Documents/Unit 5 Proj.py", line 8, in <module>
    account_sid = os.environ['ACa30b8fa59e7eba0e6dc5dbedaf5e8fcb']
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/os.py", line 679, in __getitem__
    raise KeyError(key) from None
KeyError: 'ACa30b8fa59e7eba0e6dc5dbedaf5e8fcb'
You are mixing up two concepts here:
1. Hard-coding your Sid and Auth Token values in your code.
2. Pulling your Sid and Auth Token values from the execution environment.
You only want to do one or the other, not both. The most straightforward approach would be to do just #1, which would look like this:
account_sid = 'ACa30b8fa59e7eba0e6dc5dbedaf5e8fcb'
auth_token = 'REDACTED'
If you want to do #2, then you need to define two exported variables in the environment in which you are running your program. It looks like you're on a Mac, so you could do something like this in, say, your ~/.bash_profile file:
export TWILIO_SID=ACa30b8fa59e7eba0e6dc5dbedaf5e8fcb
export TWILIO_AUTH_TOKEN=REDACTED
and then do this in your Python code:
account_sid = os.environ['TWILIO_SID']
auth_token = os.environ['TWILIO_AUTH_TOKEN']
UPDATE: I'll add that #2 is much preferred in production code because it avoids you having to check in sensitive account credentials to a shared source code repository that other people will see. That's the whole reason for the added complexity. If you're just fooling around or doing a personal project, then #1 might be fine. If you're doing work where your code will be seen by others, do #2 to keep your credentials to yourself.
Read your error!
Your call to os.environ[your-key] is raising KeyError(key). When googled, this error occurs when you try to access a key that is missing from the os.environ dictionary. Did you set your keys as environment variables? If the key you are indexing in your code is literally your account_sid and auth_token values, then you are doing it wrong for sure.
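For example, a small sketch that fails with a clear message instead of a bare KeyError (variable names taken from the other answer):

import os

account_sid = os.environ.get('TWILIO_SID')
auth_token = os.environ.get('TWILIO_AUTH_TOKEN')
if not account_sid or not auth_token:
    raise SystemExit('Set TWILIO_SID and TWILIO_AUTH_TOKEN in your environment first.')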
I am attempting to create a python script to connect to and interact with my AWS account. I was reading up on it here https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html
and I see that it reads your credentials from ~/.aws/credentials (on a Linux machine). However, I am not connecting with an IAM user but with an SSO user. Thus, the profile connection data I use is located in the ~/.aws/sso/cache directory.
Inside that directory, I see two json files. One has the following keys:
startUrl
region
accessToken
expiresAt
the second has the following keys:
clientId
clientSecret
expiresAt
I don't see anywhere in the docs about how to tell it to use my SSO user.
Thus, when I try to run my script, I get an error such as
botocore.exceptions.ClientError: An error occurred (AuthFailure) when calling the DescribeSecurityGroups operation: AWS was not able to validate the provided access credentials
even though I can run the same command fine from the command prompt.
This was fixed in boto3 1.14.
So given you have a profile like this in your ~/.aws/config:
[profile sso_profile]
sso_start_url = <sso-url>
sso_region = <sso-region>
sso_account_id = <account-id>
sso_role_name = <role>
region = <default region>
output = <default output (json or text)>
And then login with
$ aws sso login --profile sso_profile
You will be able to create a session:
import boto3
boto3.setup_default_session(profile_name='sso_profile')
client = boto3.client('<whatever service you want>')
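For example, the failing call from the question should then succeed:

import boto3

boto3.setup_default_session(profile_name='sso_profile')
ec2 = boto3.client('ec2')
print(ec2.describe_security_groups())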
So here's the long and hairy answer tested on boto3==1.21.39:
It's an eight-step process:
1. register the client using sso-oidc.register_client
2. start the device authorization flow using sso-oidc.start_device_authorization
3. redirect the user to the sso login page using webbrowser.open
4. poll sso-oidc.create_token until the user completes the sign-in
5. list and present the account roles to the user using sso.list_account_roles
6. get role credentials using sso.get_role_credentials
7. create a new boto3 session with the role credentials from (6)
8. eat a cookie
Step 8 is really key and should not be overlooked as part of any successful authorization flow.
In the sample below the account_id should be the account id of the account you are trying to get credentials for. And the start_url should be the url that aws generates for you to start the sso flow (in the AWS SSO management console, under Settings).
from time import sleep
import webbrowser

from boto3.session import Session

session = Session()
account_id = '1234567890'
start_url = 'https://d-0987654321.awsapps.com/start'
region = 'us-east-1'

sso_oidc = session.client('sso-oidc')
client_creds = sso_oidc.register_client(
    clientName='myapp',
    clientType='public',
)
device_authorization = sso_oidc.start_device_authorization(
    clientId=client_creds['clientId'],
    clientSecret=client_creds['clientSecret'],
    startUrl=start_url,
)
url = device_authorization['verificationUriComplete']
device_code = device_authorization['deviceCode']
expires_in = device_authorization['expiresIn']
interval = device_authorization['interval']

webbrowser.open(url, autoraise=True)
for n in range(1, expires_in // interval + 1):
    sleep(interval)
    try:
        token = sso_oidc.create_token(
            grantType='urn:ietf:params:oauth:grant-type:device_code',
            deviceCode=device_code,
            clientId=client_creds['clientId'],
            clientSecret=client_creds['clientSecret'],
        )
        break
    except sso_oidc.exceptions.AuthorizationPendingException:
        pass

access_token = token['accessToken']
sso = session.client('sso')
account_roles = sso.list_account_roles(
    accessToken=access_token,
    accountId=account_id,
)
roles = account_roles['roleList']
# simplifying here for illustrative purposes
role = roles[0]

# note: the credentials sit under the 'roleCredentials' key of the response
role_creds = sso.get_role_credentials(
    roleName=role['roleName'],
    accountId=account_id,
    accessToken=access_token,
)['roleCredentials']

session = Session(
    region_name=region,
    aws_access_key_id=role_creds['accessKeyId'],
    aws_secret_access_key=role_creds['secretAccessKey'],
    aws_session_token=role_creds['sessionToken'],
)
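A quick way to verify the new session is an STS call that echoes the assumed role:

print(session.client('sts').get_caller_identity())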
Your current .aws/sso/cache folder structure looks like this:
$ ls
botocore-client-XXXXXXXX.json cXXXXXXXXXXXXXXXXXXX.json
The 2 json files contain 3 different parameters that are useful.
botocore-client-XXXXXXXX.json -> clientId and clientSecret
cXXXXXXXXXXXXXXXXXXX.json -> accessToken
Using the access token in cXXXXXXXXXXXXXXXXXXX.json you can call get-role-credentials. The output from this command can be used to create a new session.
Your Python file should look something like this:
import json
import os

import boto3

cache_dir = os.path.expanduser('~/.aws/sso/cache')
json_files = [f for f in os.listdir(cache_dir) if f.endswith('.json')]

for json_file in json_files:
    path = os.path.join(cache_dir, json_file)
    with open(path) as file:
        data = json.load(file)
        if 'accessToken' in data:
            accessToken = data['accessToken']

client = boto3.client('sso', region_name='us-east-1')
response = client.get_role_credentials(
    roleName='string',
    accountId='string',
    accessToken=accessToken,
)
session = boto3.Session(
    aws_access_key_id=response['roleCredentials']['accessKeyId'],
    aws_secret_access_key=response['roleCredentials']['secretAccessKey'],
    aws_session_token=response['roleCredentials']['sessionToken'],
    region_name='us-east-1',
)
A well-formed boto3-based script should transparently authenticate based on profile name. It is not recommended to handle the cached files or keys or tokens yourself, since the official code methods might change in the future. To see the state of your profile(s), run aws configure list, for example:
$ aws configure list --profile=sso
Name Value Type Location
---- ----- ---- --------
profile sso manual --profile
The SSO session associated with this profile has expired or is otherwise invalid.
To refresh this SSO session run aws sso login with the corresponding profile.
$ aws configure list --profile=old
Name Value Type Location
---- ----- ---- --------
profile old manual --profile
access_key ****************3DSx shared-credentials-file
secret_key ****************sX64 shared-credentials-file
region us-west-1 env ['AWS_REGION', 'AWS_DEFAULT_REGION']
What works for me is the following:
import boto3
session = boto3.Session(profile_name="sso_profile_name")
session.resource("whatever")
using boto3==1.20.18.
This would work if you had previously configured SSO for aws, i.e. aws configure sso.
Interestingly enough, I don't have to go through this if I use IPython; I just run aws sso login beforehand and then call boto3.Session().
I am trying to figure out whether there is something wrong with my approach - I fully agree with what was said above with respect to transparency, and although it is a working solution, I am not in love with it.
EDIT: there was something wrong and here is how I fixed it:
run aws configure sso (as above);
install aws-vault - it basically replaces aws sso login --profile <profile-name>;
run aws-vault exec <profile-name> to create a sub-shell with AWS credentials exported to environment variables.
Doing so, it is possible to run any boto3 command both interactively (eg. iPython) and from a script, as in my case. Therefore, the snippet above simply becomes:
import boto3
session = boto3.Session()
session.resource("whatever")
See here for further details on aws-vault.
I'm running through Sendgrid's intro material for Python but executing the example code throws a 403-Forbidden error.
Steps I took:
Create API Key & sendgrid.env file as instructed.
Create a conda environment with python 3.5: conda create -n sendgrid python=3.5
Install sendgrid: (sendgrid) pip install sendgrid
Run example: (sendgrid) python main.py
Where main.py contains the exact code copied from the example page linked above.
Issue: Running main.py throws the error HTTP Error 403: Forbidden.
Things I've tried:
I tried switching out the to and from emails in that example, but that doesn't change the result.
I also tried the same flow using NodeJS, with the same result.
Any ideas on what I'm doing wrong?
Give the API key full access; follow these steps:
Settings
API Keys
Edit API Key
Full Access
Update
Whitelist your domain; follow these steps:
Settings
Sender Authentication
Domain Authentication
Select DNS Host
Enter your domain name
Copy all records and put them in your Advanced DNS management console
NOTE: When adding records, make sure not to have domain name in the host. Crop it out.
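For example (hypothetical record values): if SendGrid gives you a CNAME whose host is em1234.yourdomain.com and your DNS console appends the domain automatically, enter only the cropped host:

Host shown by SendGrid:  em1234.yourdomain.com
Host to enter in DNS:    em1234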
If you do not want to authenticate a domain, you can try Single Sender Verification as well.
Note: It might take some time for records to start functioning.
If you're using pylint, e.message will produce
Instance of 'Exception' has no 'message' member
This is because the message attribute is generated dynamically by sendgrid, which pylint is unable to see as it doesn't exist before runtime.
So, to prevent that, at the top of your file or above the print(e.message) line, you need to add either one of the below; they mean the same thing:
# pylint: disable=no-member
E1101 is the code for no-member; find more here
# pylint: disable=E1101
Now the code below should work for you. Just make sure you have SENDGRID_API_KEY set in the environment. If not, you may also directly replace os.environ.get("SENDGRID_API_KEY") with the key itself, which is not good practice though.
# pylint: disable=E1101
import os

from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

message = Mail(
    from_email="from_email@your-whitelisted-domain.com",
    to_emails=("recipient1@example.com", "recipient2@example.com"),
    subject="Sending with Twilio SendGrid is Fun",
    html_content="<strong>and easy to do anywhere, even with Python</strong>")
try:
    sg = SendGridAPIClient(os.environ.get("SENDGRID_API_KEY"))
    response = sg.send(message)
    print(response.status_code)
    print(response.body)
    print(response.headers)
except Exception as e:
    print(e.message)
I wrote a Python script to pull images from AWS ECR to an Ubuntu instance. On this instance, I run docker commands with sudo as Docker is not setup as a non-root user.
I do use sudo when invoking the script. What I find is that if I am logged into AWS ECR first and then run the script, it works as expected. However, if I am not logged in and the auth token is expired, it appears that docker login works, but when I try to pull I get a message indicating that the "repository does not exist or may require 'docker login'".
Examining the logs verifies this:
Feb 15 06:00:38 ubuntu-xenial dockerd[1388]:
time="2019-02-15T06:00:38.832827449Z" level=error msg="Not continuing
with pull after error: denied: Your Authorization Token has expired.
Please run 'aws ecr get-login --no-include-email' to fetch a new one."
def log_into_aws_ecr(docker_client, region):
    # TODO: set region
    ecr_client = boto3.client('ecr', region_name=region)

    # Get all repos
    response = ecr_client.describe_repositories()
    repo_names = []
    repositories = response.get('repositories', [])
    for repo in repositories:
        name = repo.get('repositoryName', '')
        if len(name):
            repo_names.append(name)

    token = ecr_client.get_authorization_token()
    username, password = base64.b64decode(
        token['authorizationData'][0]['authorizationToken']).decode('utf-8').split(":")
    registry_url = token['authorizationData'][0]['proxyEndpoint']

    login_results = docker_client.login(username, password, email='', registry=registry_url)

    prefix = 'https://'
    if registry_url.startswith(prefix):
        registry = registry_url[len(prefix):]
    else:
        registry = registry_url

    auth_config_payload = {'username': username, 'password': password}
    return ecr_client, repo_names, registry
Please note that this code is also being re-factored now, so there are some variables that are defined but not currently in use.
The supplied docker_client is obtained via the line
docker_client = docker.from_env()
I've tried running as
sudo -E ./myscript.py image
But this doesn't work either. I have a variant of this using a bash script and that works fine.
Output for the docker_client.login looks like
Looking for auth entry for 'ABCXYZ.dkr.ecr.us-west-2.amazonaws.com'
Found 'ABCXYZ.dkr.ecr.us-west-2.amazonaws.com'
And if I dump the response, it looks like this.
{'password': 'PASSWORD HERE', 'email': None, 'username': 'AWS',
'serveraddress': 'ABCXYZ.dkr.ecr.us-west-2.amazonaws.com'}
Okay, I'm not quite sure if this is fully correct, however, based on some experimentation over the last few days, it does work. I also did open up an issue on the docker-py GitHub repo, but at least as of now, no one has chimed in.
https://github.com/docker/docker-py/issues/2256
In a nutshell (also listed in the linked issue), here is what I came up with:
Okay, I ran some experiments over the last few days. I had to deal with the 12-hour AWS ECR token, so it took a little longer to do.
It does seem that there is an issue with docker-py.
Based on my findings, I can either use boto3 or run a sub-process calling the aws ecr command line. However, the only permutation that seems to work involves the following steps.
1. Use a sub-process to perform the docker login. This will result in the config.json file being updated (not sure if this has any relevance at all or not).
2. Create the docker client via docker_client = docker.from_env(). I have found that doing this prior to the sub-process results in it not working properly (unless you already have a valid config.json).
3. Then call docker_client.login(username=username, password=password, registry=registry_url).
Whether or not this is expected, or whether I'm doing something wrong, I don't know. These are the steps I've found that work; a rough sketch follows.
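A minimal sketch of those three steps (assuming username, password, and registry_url were obtained via get_authorization_token as in the function above):

import subprocess

import docker

# 1. login via a sub-process so that config.json gets updated first
subprocess.run(
    ['docker', 'login', '--username', username, '--password-stdin', registry_url],
    input=password.encode(), check=True,
)

# 2. create the client only after the sub-process login
docker_client = docker.from_env()

# 3. login through docker-py as well
docker_client.login(username=username, password=password, registry=registry_url)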
get_authorization_token returns the username and password, but base64-encoded. See https://github.com/aws/aws-cli/blob/develop/awscli/customizations/ecr.py#L53:L54
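Decoding it looks like this (mirroring the linked aws-cli code; the region here is assumed):

import base64

import boto3

ecr_client = boto3.client('ecr', region_name='us-west-2')
token = ecr_client.get_authorization_token()
auth = token['authorizationData'][0]['authorizationToken']
username, password = base64.b64decode(auth).decode('utf-8').split(':')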