Glue database connection update username aws cli/boto3 - python

I'm trying to update the username of a Glue database JDBC connection and keep failing. My choices are the CLI or boto3, and the CLI docs are very limited:
https://docs.aws.amazon.com/cli/latest/reference/glue/update-connection.html
update-connection
[--catalog-id <value>]
--name <value>
--connection-input <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
Can someone guide me on how to pass the username in this update? Similarly for boto3: my attempt below throws an invalid-parameter exception.
response = client.update_connection(
    Name='test-db',
    ConnectionInput={
        'Name': 'test-db',
        'ConnectionType': 'JDBC',
        'ConnectionProperties': {
            'Username': username
        }
    }
)

Try:
'ConnectionProperties': {
    'USER_NAME': 'your_user_name',
    'PASSWORD': 'your_user_password'
}
Caution: the above is not tested. It's based on the Glue Boto3 documentation from here.

So it's supposed to be like this:
            'USERNAME': username,
            'PASSWORD': password
        },
        'PhysicalConnectionRequirements': PhysicalConnectionRequirements
    }
)

You have to add all of the details for the update_connection:
glue_client = boto3.client('glue')
glue_client.update_connection(
    Name='Redshift-Rosko',
    ConnectionInput={
        'Name': 'Redshift-Rosko',
        'ConnectionType': 'JDBC',
        'ConnectionProperties': {
            "USERNAME": username,
            "PASSWORD": password,
            "JDBC_ENFORCE_SSL": "true",
            "JDBC_CONNECTION_URL": "jdbc:redshift://...:5439/rosko_db",
            "KAFKA_SSL_ENABLED": "false"
        },
        'PhysicalConnectionRequirements': {
            'SubnetId': 'subnet-......',
            'SecurityGroupIdList': [
                'sg-......',
                'sg-......',
            ],
            "AvailabilityZone": "eu-central-1a"
        }
    }
)
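If you only need to change the username and keep everything else, one option (an untested sketch, using get_connection and update_connection as documented for boto3) is to read the existing connection, patch the one property, and send the merged definition back:
import boto3

glue_client = boto3.client('glue')

# Read the current definition; with HidePassword=False the stored properties are returned.
current = glue_client.get_connection(Name='test-db', HidePassword=False)['Connection']

# Patch only the username and keep the other properties as-is ('new_user_name' is a placeholder).
props = current['ConnectionProperties']
props['USERNAME'] = 'new_user_name'

connection_input = {
    'Name': current['Name'],
    'ConnectionType': current['ConnectionType'],
    'ConnectionProperties': props,
}
if 'PhysicalConnectionRequirements' in current:
    connection_input['PhysicalConnectionRequirements'] = current['PhysicalConnectionRequirements']

glue_client.update_connection(Name='test-db', ConnectionInput=connection_input)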


BigQueryInsertJobOperator with Export Configuration

I am trying to retrieve GA data from BigQuery using the operators provided in the Airflow documentation.
The documentation is not very explicit about the usage of BigQueryInsertJobOperator, which is replacing BigQueryExecuteQueryOperator.
My DAG works as follows:
1. In a dataset, list the table names.
2. Using BigQueryInsertJobOperator, query all the tables using this syntax from the cookbook:
`{my-project}.{my-dataset}.events_*`
WHERE _TABLE_SUFFIX BETWEEN '{start}' AND '{end}'
select_query_job = BigQueryInsertJobOperator(
    task_id="select_query_job",
    gcp_conn_id='big_query',
    configuration={
        "query": {
            "query": build_query.output,
            "useLegacySql": False,
            "allowLargeResults": True,
            "useQueryCache": True,
        }
    }
)
3. Retrieve the job id from the XCom and use BigQueryInsertJobOperator with extract in the configuration to get the query results, as in this API.
However, I receive an error message and I am unable to access the data. All the steps before step 3 work perfectly; I can see that from the cloud console.
The operators I tried:
retrieve_job_data = BigQueryInsertJobOperator(
    task_id="get_job_data",
    gcp_conn_id='big_query',
    job_id=select_query_job.output,
    project_id=project_name,
    configuration={
        "extract": {
        }
    }
)

# Or:
retrieve_job_data = BigQueryInsertJobOperator(
    task_id="get_job_data",
    gcp_conn_id='big_query',
    configuration={
        "extract": {
            "jobId": select_query_job.output,
            "projectId": project_name
        }
    }
)
google.api_core.exceptions.BadRequest: 400 POST https://bigquery.googleapis.com/bigquery/v2/projects/{my-project}/jobs?prettyPrint=false: Required parameter is missing
[2022-08-16, 09:44:01 UTC] {taskinstance.py:1415} INFO - Marking task as FAILED. dag_id=BIG_QUERY, task_id=get_job_data, execution_date=20220816T054346, start_date=20220816T054358, end_date=20220816T054401
[2022-08-16, 09:44:01 UTC] {standard_task_runner.py:92} ERROR - Failed to execute job 628 for task get_job_data (400 POST https://bigquery.googleapis.com/bigquery/v2/projects/{my-project}/jobs?prettyPrint=false: Required parameter is missing; 100144)
Following the above link gives:
{
    "error": {
        "code": 401,
        "message": "Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.",
        "errors": [
            {
                "message": "Login Required.",
                "domain": "global",
                "reason": "required",
                "location": "Authorization",
                "locationType": "header"
            }
        ],
        "status": "UNAUTHENTICATED",
        "details": [
            {
                "@type": "type.googleapis.com/google.rpc.ErrorInfo",
                "reason": "CREDENTIALS_MISSING",
                "domain": "googleapis.com",
                "metadata": {
                    "service": "bigquery.googleapis.com",
                    "method": "google.cloud.bigquery.v2.JobService.ListJobs"
                }
            }
        ]
    }
}
I see that the error is HTTP 401 and that I don't have access to GC, which is not normal since my gcp_conn_id works in the other operators (and I am specifying the project id!).

For the Extract job type, you must pass a destinationUri (or destinationUris) and sourceTable; the missing sourceTable is what causes the 400 "Required parameter is missing" error (the 401 shown above only appears because following the link in a browser is unauthenticated).
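For reference (the values below are placeholders, not taken from the question), a complete extract configuration has roughly this shape; here sourceTable is exactly what the pre_execute hook described next fills in:
configuration = {
    "extract": {
        "sourceTable": {
            "projectId": "your-project",   # placeholder
            "datasetId": "your_dataset",   # placeholder
            "tableId": "your_table",       # placeholder
        },
        "destinationUris": ["gs://your-bucket/some-path/part-*.csv"],
        "destinationFormat": "CSV",        # optional; CSV is the default
    }
}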
Now that you have a job_id, you can implement a pre_execute hook in your constructor to fetch that job.
The destinationTable field in the Query job's configuration is what the Extract job needs as its sourceTable: even though you configured the Query job with useQueryCache, BigQuery still stores the results in an anonymised table.
The configuration for the Query job when it is retrieved looks like:
{
    /*...*/
    "configuration": {
        "query": {
            "query": "SELECT weight_pounds, state, year, gestation_weeks FROM [bigquery-public-data:samples.natality] ORDER BY weight_pounds DESC LIMIT 10;",
            "destinationTable": {
                "projectId": "redacted",
                "datasetId": "_redacted",
                "tableId": "anon0d85adcadde61fa17550f9841810e343fb5bc82d"
            },
            "writeDisposition": "WRITE_TRUNCATE",
            "priority": "INTERACTIVE",
            "useQueryCache": true,
            "useLegacySql": true
        },
        "jobType": "QUERY"
    },
    /*...*/
}
retrieve_job_data = BigQueryInsertJobOperator(
    task_id="get_job_data",
    gcp_conn_id='big_query',
    job_id=select_query_job.output,
    project_id=project_name,
    pre_execute=populate_extract_source_table,
    configuration={
        "extract": {
            "destinationUris": ["gs://your-bucket/some-path"]
        }
    }
)
def populate_extract_source_table(ctx):
    task = ctx['task']           # the running extract operator
    job_id = task.job_id         # the job id of the query job, passed in above
    hook = BigQueryHook(
        gcp_conn_id=task.gcp_conn_id,
        delegate_to=task.delegate_to,
        impersonation_chain=task.impersonation_chain,
    )
    # Retrieve the query job
    job = hook.get_job(
        project_id=task.project_id,
        location=task.location,
        job_id=job_id,
    )
    # Set the sourceTable of the extract job to the destinationTable of the query job.
    jr = job.to_api_repr()
    task.configuration['extract']['sourceTable'] = jr["configuration"]["query"]["destinationTable"]

How to link Firebase's Authentication with its Realtime Database in python

I am using the firebase library and I've also tried the pyrebase library, but the database requests don't seem to be authenticated.
Here's a snippet of my code:
import firebase

# I have changed the Firebase config details for security purposes, but I can still
# connect to Firebase with my valid credentials.
firebase_config = {
    'apiKey': "asdfghjkl",
    'authDomain': "fir-test-6bb4c.firebaseapp.com",
    'projectId': "fir-test-6bb4c",
    'storageBucket': "fir-test-6bb4c.appspot.com",
    'messagingSenderId': "410996397069",
    'appId': "1:410996397069:web:856f4181a8d9debf15b144",
    'measurementId': "G-Y0Z6PDGKC2",
    'databaseURL': 'https://fir-test-6bb4c-default-rtdb.firebaseio.com/'
}

cloud = firebase.Firebase(firebase_config)
auth = cloud.auth()
db = cloud.database()
storage = cloud.storage()

# Sign in
# (the email is changed for security purposes again, but authentication succeeds
# with the right email and password)
email = "test1234567@gmail.com"
password = "123456789"
auth.sign_in_with_email_and_password(email, password)

# Connect to the Realtime Database
uid = auth.current_user["localId"]
path = db.child("users/" + uid)
And here are the database rules:
{
    "rules": {
        ".read": false,
        ".write": false,
        "users": {
            "$folder": {
                ".read": "auth != null && auth.uid === $folder",
                ".write": "auth != null && auth.uid === $folder"
            }
        },
        "secrets": {
            ".read": false,
            ".write": false
        }
    }
}
Here are a few things I found out while playing with the rules and in the rules playground:
The firebase/pyrebase database requests are not authenticated, even though the authentication at auth.sign_in_with_email_and_password(email, password) was successful.
In the rules playground I was able to send authenticated requests, which allowed me to read and write when the uid and folder name matched.
I was also looking for a way for the rules to print or log something for debugging, or for the rules themselves to read from and write to the database.
I found that Firestore has the debug() function, but I would like to stick with the Realtime Database since I've been using it for a while.
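(Not from the original post.) With Pyrebase-style clients, database calls do not attach the signed-in user's token automatically; the usual pattern is to keep the result of sign_in_with_email_and_password and pass its idToken on each request so the rules see auth != null. A rough, untested sketch assuming the Pyrebase API:
# Untested sketch, assuming the Pyrebase-style API where database calls take the ID token.
user = auth.sign_in_with_email_and_password(email, password)
uid = user["localId"]
id_token = user["idToken"]

# Passing the token makes the request authenticated, so auth.uid === $folder can match;
# set()/update()/push() accept the token the same way as their last argument.
data = db.child("users").child(uid).get(id_token)

# ID tokens expire after about an hour; refresh with the refresh token when needed.
user = auth.refresh(user["refreshToken"])
id_token = user["idToken"]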

How to add permission can_get_all_acc_detail to an account of existing blockchain network on hyperledger-iroha?

I want to add an account that has some information readable by all users. According to the documentation, the user needs the can_get_all_acc_detail permission, so I'm trying to add it by creating a new role:
tx = self.iroha.transaction([
    self.iroha.command('CreateRole', role_name='info', permissions=[primitive_pb2.can_get_all_acc_detail])
])
tx = IrohaCrypto.sign_transaction(tx, account_private_key)
net.send_tx(tx)
Unfortunately, after sending the transaction I see the status:
status_name:ENOUGH_SIGNATURES_COLLECTED, status_code:9, error_code:0(OK)
But then it takes 5 minutes until it times out.
I've noticed that the transaction JSON embeds permissions in a different way than the genesis block does:
payload {
  reduced_payload {
    commands {
      create_role {
        role_name: "info_account"
        permissions: can_get_all_acc_detail
      }
    }
    creator_account_id: "admin@example"
    created_time: 1589408498074
    quorum: 1
  }
}
signatures {
  public_key: "92f9f9e10ce34905636faff41404913802dfce9cd8c00e7879e8a72085309f4f"
  signature: "568b69348aa0e9360ea1293efd895233cb5a211409067776a36e6647b973280d2d0d97a9146144b9894faeca572d240988976f0ed224c858664e76416a138901"
}
For comparison, in genesis.block it is:
{
    "createRole": {
        "roleName": "money_creator",
        "permissions": [
            "can_add_asset_qty",
            "can_create_asset",
            "can_receive",
            "can_transfer"
        ]
    }
},
I'm using Iroha version 1.1.3 (but I also tested on 1.1.1); the Python iroha SDK version is 0.0.5.5.

Does the account you used to execute the 'CreateRole' command have the "can_create_role" permission?
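(Not from the original thread.) One way to see why the transaction never commits is to stream all of its statuses instead of reading only the first one; a rough sketch, assuming the IrohaGrpc client (net) from the iroha Python SDK. A STATEFUL_VALIDATION_FAILED final status here usually means the creator account lacks can_create_role:
# Untested sketch: stream every status update for the transaction.
# ENOUGH_SIGNATURES_COLLECTED is only an intermediate status; the final one
# (COMMITTED or STATEFUL_VALIDATION_FAILED) tells you whether the CreateRole
# command was actually applied.
for status in net.tx_status_stream(tx):
    print(status)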

Logging in to ecr registry with python docker sdk doesn't work as expected

Assuming I have already gotten the ECR credentials from boto in an object called creds, when I do:
client = from_env()
client.login(creds.username, password=creds.password, registry=creds.endpoint)
I get:
{u'IdentityToken': u'', u'Status': u'Login Succeeded'}
Great so far! And I inspect:
client.api.__dict__
I get:
{'_auth_configs': {'auths': {'registry_i_just_logged_into': {'email': None,
        'password': 'xxxxxxxxxxxxx',
        'serveraddress': 'registry_i_just_logged_into',
        'username': 'xxxxxxx'},
    u'some_other_registry': {},
    'credsStore': u'osxkeychain'}
    .... (etc, etc)
Still so far, so good. But when I then do:
client.images.pull("registry_i_just_logged_into/some_repo", tag="latest")
Or when I do (from a command line):
docker pull registry_i_just_logged_into/some_repo:latest
I always get:
Error response from daemon: pull access denied for some_repo, repository does not exist or may require 'docker login'
Despite the fact that, if I do (with the same username and password I used to log in):
client.images.pull("registry_i_just_logged_into/some_repo", tag="latest", auth_config={'username': creds.username, 'password': creds.password})
It works with no problems.
So I am assuming this is a problem with the order of resolving which registry to use, but it seems like the Docker SDK should handle this if the key already exists within _auth_configs.
What am I doing wrong?
Thanks!
Short:
rm -rf ~/.docker/config.json
Long:
Remove the credsStore, credSstore and auths properties from ~/.docker/config.json.
Explanation:
You have probably already tried to log in, so your Docker config.json has credsStore, credSstore and auths properties.
E.g.:
"credSstore" : "osxkeychain",
"auths" : {
"acc_id_1.dkr.ecr.us-east-1.amazonaws.com" : {
},
"acc_id_2.dkr.ecr.us-east-1.amazonaws.com" : {
},
"https://acc_id_1.dkr.ecr.us-east-1.amazonaws.com" : {
},
"https://acc_id_2.dkr.ecr.us-east-1.amazonaws.com" : {
}
},
"HttpHeaders" : {
"User-Agent" : "Docker-Client/18.06.1-ce (darwin)"
},
"credsStore" : "osxkeychain"
}
token = client.get_authorization_token() returns a base64-encoded token, so to successfully log in you need to decode it:
import docker
import boto3
import base64

docker_client = docker.from_env()
client = boto3.client('ecr', aws_access_key_id="xyz", aws_secret_access_key="abc", region_name="ap-south-1")
token = client.get_authorization_token()
docker_client.login(
    username="AWS",
    password=base64.b64decode(token["authorizationData"][0]["authorizationToken"]).decode().split(":")[1],
    registry="xxxx.dkr.ecr.ap-south-1.amazonaws.com"
)
which will return:
{'IdentityToken': '', 'Status': 'Login Succeeded'}
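As a side note (not in the original answer), the same authorizationData entry also contains the registry endpoint, so the registry URL does not have to be hardcoded; a small sketch:
# Untested sketch: take username, password and registry from the token response.
auth_data = token["authorizationData"][0]
username, password = base64.b64decode(auth_data["authorizationToken"]).decode().split(":", 1)
registry = auth_data["proxyEndpoint"]  # e.g. "https://xxxx.dkr.ecr.ap-south-1.amazonaws.com"
docker_client.login(username=username, password=password, registry=registry)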

How to set a geolocation on a DNS record in route53 with boto

In the following snippet of code I'm adding a value to a change set that I'll later commit:
add = changes.add_change('CREATE', url, record_type, ttl=DEFAULT_TTL)
add.add_value(new_val)
How would I add a geolocation to the created record? I can see in the docs at http://boto.readthedocs.org/en/latest/ref/route53.html#module-boto.route53.record that I should be able to add a region for latency-based routing by adding a region="blah" argument. However, I don't see any mention of geolocation. Is the library capable of handling a geolocation routing policy, or do I just need to stick with a latency routing policy?
Please try the snippet below. Install boto3 with "pip install boto3":
import boto3

client = boto3.client('route53')
response = client.change_resource_record_sets(
    HostedZoneId='ZYMJVBD6FUN6S',
    ChangeBatch={
        'Comment': 'comment',
        'Changes': [
            {
                'Action': 'CREATE',
                'ResourceRecordSet': {
                    'Name': 'udara.com',
                    'Type': 'A',
                    'SetIdentifier': 'Africa record',
                    'GeoLocation': {
                        'ContinentCode': 'AF'
                    },
                    'TTL': 123,
                    'ResourceRecords': [
                        {
                            'Value': '127.0.0.1'
                        },
                    ],
                }
            },
        ]
    }
)
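Optionally (not part of the original answer), you can wait for the change to finish propagating before moving on; boto3 exposes a waiter for this:
# Untested sketch: block until Route 53 reports the change as INSYNC.
waiter = client.get_waiter('resource_record_sets_changed')
waiter.wait(Id=response['ChangeInfo']['Id'])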
