I'm trying to create an AWS Lambda webservice that takes a payload with a new username / password to create a new database and user in an RDS instance.
I'd like to use Boto3 to accomplish this, but I can't seem to find any documentation for this function.
Is this possible using this setup?
Currently, neither the AWS SDKs for RDS (including the Boto3 SDK) nor the AWS CLI support this.
That is because creating database users is specific to each DB engine (MySQL, Oracle, etc.).
The option you have is to run a DDL query using the database driver for your engine.
http://boto3.readthedocs.io/en/latest/reference/services/rds.html#RDS.Client.generate_db_auth_token documents how to create an auth token for connecting to an RDS instance, and http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html covers other setup details.
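For example, a minimal sketch of running that DDL from the Lambda handler with the PyMySQL driver, assuming a MySQL-compatible instance (the endpoint and master credentials are placeholders; in practice fetch them from Secrets Manager or use an IAM auth token as described above):

import pymysql

def lambda_handler(event, context):
    # Placeholder endpoint and master credentials for illustration only.
    conn = pymysql.connect(host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
                           user="master_user",
                           password="master_password",
                           autocommit=True)
    try:
        with conn.cursor() as cur:
            # Identifiers cannot be bound as query parameters, so validate the
            # database name from the payload before formatting it in.
            cur.execute("CREATE DATABASE IF NOT EXISTS `{}`".format(event["dbname"]))
            cur.execute("CREATE USER %s@'%%' IDENTIFIED BY %s",
                        (event["username"], event["password"]))
            cur.execute("GRANT ALL PRIVILEGES ON `{}`.* TO %s@'%%'".format(event["dbname"]),
                        (event["username"],))
    finally:
        conn.close()
    return {"status": "created"}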
Related:
Is there any way to reset the AWS RDS master password using a Python script?
Any help would be appreciated.
You can use modify_db_instance() to change one or more database configuration parameters including MasterUserPassword.
You might then need to reboot_db_instance() to apply the changes.
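A minimal sketch (instance identifier and new password are placeholders):

import boto3

rds = boto3.client('rds')

# Change the master password; ApplyImmediately avoids waiting for the
# next maintenance window.
rds.modify_db_instance(DBInstanceIdentifier='my-db-instance',
                       MasterUserPassword='MyNewStr0ngPassw0rd',
                       ApplyImmediately=True)

# Some modifications only take effect after a reboot:
# rds.reboot_db_instance(DBInstanceIdentifier='my-db-instance')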
Solution: To change the RDS master password on AWS, you have several good options. Modify the instance/cluster attributes using:
the AWS CLI to modify the DB instance/cluster: https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html
the Python AWS SDK (Boto3): https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.modify_db_instance
an IaC tool such as Terraform: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance
or manually in the RDS dashboard, under the Modify option.
Moving this question from DevOps Stack Exchange where it got only 5 views in 2 days:
I would like to query an Azure Database for MySQL Single Server.
I normally interact with this database using a universal database tool (dBeaver) installed onto an Azure VM. Now I would like to interact with this database using Python from outside Azure. Ultimately I would like to write an API (FastAPI) allowing multiple users to connect to the database.
I ran a simple test from a Jupyter notebook, using SQLAlchemy as my ORM and specifying the pem certificate as a connection argument:
import pandas as pd
from sqlalchemy import create_engine
cnx = create_engine('mysql://XXX', connect_args={"ssl": {"ssl_ca": "mycertificate.pem"}})
I then tried reading data from a specific table (e.g. mytable):
df = pd.read_sql('SELECT * FROM mytable', cnx)
Alas I ran into the following error:
'Client with IP address 'XX.XX.XXX.XXX' is not allowed to connect to
this MySQL server'.
According to my colleagues, a way to fix this issue would be to whitelist my IP address.
While this may be an option for a couple of users with static IP addresses I am not sure whether it is a valid solution in the long run.
Is there a better way to access an Azure Database for MySQL Single Server from outside Azure?
As mentioned in the comments, you need to whitelist the IP address range(s) in the Azure portal for your MySQL database resource. This is a well-accepted and secure approach.
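Once your IP range is whitelisted, a minimal sketch of the connection, reusing the driver and certificate from the question (server, user, password, and database names are placeholders; Single Server expects the login in the user@servername form, with the @ URL-encoded as %40):

import pandas as pd
from sqlalchemy import create_engine

cnx = create_engine(
    'mysql://myuser%40myserver:mypassword@myserver.mysql.database.azure.com:3306/mydb',
    connect_args={"ssl": {"ssl_ca": "mycertificate.pem"}})

df = pd.read_sql('SELECT * FROM mytable', cnx)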
I have a script which checks whether a specific value is inside a cell of a DynamoDB table in AWS. I used to hardcode credentials, including the secret key, in my script like this:
from boto3.session import Session

dynamodb_session = Session(aws_access_key_id='access_key_id',
                           aws_secret_access_key='secret_access_key',
                           region_name='region')
dynamodb = dynamodb_session.resource('dynamodb')
table = dynamodb.Table('table_name')
Are there any other ways to use those credentials without adding them to my script? Thank you.
If you are running that code on an Amazon EC2 instance, then you simply need to assign an IAM Role to the instance and it will automatically receive credentials.
If you are running that code on your own computer, then use the AWS Command-Line Interface (CLI) aws configure command to store the credentials in a local configuration file. (It will be stored in ~/.aws/credentials).
Then, in both cases, you can simply use:
import boto3

dynamodb = boto3.resource('dynamodb')
You can set the default region in that configuration too.
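If you need a specific profile or region without hardcoding keys, a small sketch (profile and region names are placeholders):

import boto3

# Credentials still come from ~/.aws/credentials or the instance role,
# never from the script itself.
session = boto3.session.Session(profile_name='my-profile', region_name='us-east-1')
dynamodb = session.resource('dynamodb')
table = dynamodb.Table('table_name')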
I am trying to connect to redshift with python through lambda. The purpose is to perform queries on the redshift database.
I've tried this by getting temporary AWS credentials and connecting with psycopg2, but it isn't successful and there are no error messages (i.e., the Lambda just times out).
rs_host = "mytest-cluster.fooooooobaaarrrr.region111111.redshift.amazonaws.com"
rs_port = 5439
rs_dbname = "dev"
db_user = "barrr_user"
def lambda_handler(events, contx):
# The cluster_creds is able to be obtained successfully. No issses here
cluster_creds = client.get_cluster_credentials(DbUser=db_user,
DbName=rs_dbname,
ClusterIdentifier="mytest-cluster",
AutoCreate=False)
try:
# It is this psycopg2 connection that cant work...
conn = psycopg2.connect(host=rs_host,
port=rs_port,
user=cluster_creds['DbUser'],
password=cluster_creds['DbPassword'],
database=rs_dbname
)
return conn
except Exception as e:
print(e)
Also, the Lambda execution role itself has these policies:
I am not sure why I am still not able to connect to Redshift via Python to perform queries.
I have also tried with the SQLAlchemy library, but no luck there.
As Johnathan Jacobson mentioned above, it was the security groups and network permissions that caused my problem.
You can review the documentation at Create AWS Lambda Function to Connect Amazon Redshift with C-Sharp in Visual Studio.
Since you already have your code in Python, you can concentrate on the networking part of the tutorial.
When launching an AWS Lambda function, it is possible to select the VPC and subnet(s) where the serverless Lambda function's servers will spin up.
You can choose exactly the same VPC and subnet(s) in which you created your Amazon Redshift cluster.
Also, review the IAM role attached to the AWS Lambda function; it additionally requires the AWSLambdaVPCAccessExecutionRole policy.
This resolves connection issues between different VPCs.
Again, even if you launch the Lambda function in the same VPC and subnet as the Redshift cluster, it is better to check the cluster's security group to make sure it accepts connections, as in the sketch below.
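For example, a sketch with Boto3 of allowing the Lambda function's security group to reach the cluster on the Redshift port (both security group IDs are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Allow inbound TCP 5439 from the Lambda function's security group
# to the Redshift cluster's security group.
ec2.authorize_security_group_ingress(
    GroupId='sg-redshift-cluster-placeholder',
    IpPermissions=[{'IpProtocol': 'tcp',
                    'FromPort': 5439,
                    'ToPort': 5439,
                    'UserIdGroupPairs': [{'GroupId': 'sg-lambda-placeholder'}]}])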
Hope it works,
I'm a complete noob with Python and boto and trying to establish a basic connection to ec2 services.
I'm running the following code:
ec2Conn = boto.connect_ec2('username','password')
group_name = 'python_central'
description = 'Python Central: Test Security Group.'
group = ec2Conn.create_security_group(group_name, description)
group.authorize('tcp', 8888,8888, '0.0.0.0/0')
and getting the following error:
AWS was not able to validate the provided access credentials
I've read in some posts that this might be due to a time difference between my machine and the EC2 server, but according to the logs they are the same:
host:ec2.us-east-1.amazonaws.com x-amz-date:20161213T192005Z
host;x-amz-date
515db222f793e7f96aa93818abf3891c7fd858f6b1b9596f20551dcddd5ca1be
2016-12-13 19:20:05,132 boto [DEBUG]:StringToSign:
Any idea how to get this connection running?
Thanks!
Calls made to the AWS API require authentication via an Access Key and Secret Key. These can be obtained from the Identity and Access Management (IAM) console, under the Security Credentials tab for a user.
See: Getting Your Access Key ID and Secret Access Key
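Once credentials are configured (for example with aws configure), a Boto3 sketch of the same steps as your script, using the group name, port, and CIDR from your code (the region is a placeholder):

import boto3

# Credentials are picked up from ~/.aws/credentials, environment variables,
# or an IAM role -- not passed in as username/password.
ec2 = boto3.client('ec2', region_name='us-east-1')

group = ec2.create_security_group(GroupName='python_central',
                                  Description='Python Central: Test Security Group.')

ec2.authorize_security_group_ingress(GroupId=group['GroupId'],
                                     IpProtocol='tcp',
                                     FromPort=8888,
                                     ToPort=8888,
                                     CidrIp='0.0.0.0/0')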
If you are unfamiliar with Python, you might find it easier to call AWS services by using the AWS Command-Line Interface (CLI). For example, this single-line command can launch an Amazon EC2 instance:
aws ec2 run-instances --image-id ami-c2d687ad --key-name joe --security-group-id sg-23cb34f6 --instance-type t1.micro
See: AWS CLI run-instances documentation