I created credentials (tried both Writer and Manager roles) in the web interface and included {"HMAC": true}.
I have successfully used these credentials for more basic operations such as put_object and upload_file.
However, I cannot get generate_presigned_post to work. It generates the following error:
ibm_botocore.exceptions.UnsupportedSignatureVersionError: Signature
version is not supported: oauth-presign-post
I run the following code,
import ibm_boto3
from ibm_botocore.client import Config

class COSPresignedURL():
    def __init__(self, config):
        cfg = Config(signature_version='oauth', s3={"payload_signing_enabled": True})
        self.cos = ibm_boto3.client(
            's3',
            ibm_api_key_id=config['api_key'],
            ibm_service_instance_id=config['instance_id'],
            ibm_auth_endpoint=config['auth_endpoint'],
            endpoint_url=config['url_endpoint'],
            config=cfg)

    def generate(self, bucket, key, Fields=None, Conditions=None, ExpiresIn=300):
        return self.cos.generate_presigned_post(bucket, key, Fields, Conditions, ExpiresIn)

def main():
    config = {
        "api_key": "VALUE OF apikey FROM CLOUD CREDENTIALS",
        "instance_id": "VALUE OF resource_instance_id FROM CLOUD CREDENTIALS",
        "auth_endpoint": "https://iam.cloud.ibm.com/identity/token",
        "url_endpoint": "https://s3.eu-de.cloud-object-storage.appdomain.cloud"
    }
    bucket = 'somebucket'
    poster = COSPresignedURL(config)
    uri = poster.generate(bucket, 'somekey')
    print(f'{uri}')

if __name__ == '__main__':
    main()
which generates the following error in full,
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ibm_botocore/signers.py", line 149, in sign
auth = self.get_auth_instance(**kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ibm_botocore/signers.py", line 222, in get_auth_instance
signature_version=signature_version)
ibm_botocore.exceptions.UnknownSignatureVersionError: Unknown Signature Version: oauth-presign-post.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "tmp.py", line 35, in <module>
main()
File "tmp.py", line 30, in main
uri = poster.generate(bucket, 'somekey')
File "tmp.py", line 16, in generate
return self.cos.generate_presigned_post(bucket, key, Fields, Conditions, ExpiresIn)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ibm_botocore/signers.py", line 714, in generate_presigned_post
expires_in=expires_in)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ibm_botocore/signers.py", line 526, in generate_presigned_post
'PutObject', request, region_name, 'presign-post')
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ibm_botocore/signers.py", line 153, in sign
signature_version=signature_version)
ibm_botocore.exceptions.UnsupportedSignatureVersionError: Signature version is not supported: oauth-presign-post
I have that working as follows. Once I have created the client and connected to the COS endpoint, my final call looks like this:
theURL = cos.generate_presigned_url('get_object',
                                    Params={'Bucket': buckets[0],
                                            'Key': objects[0]},
                                    ExpiresIn=100)
Note that I call that function with different parameters. I constructed the cos client as shown here:
cos = ibm_boto3.client('s3',
                       config["apikey"],
                       endpoint_url='https://' + cos_host,
                       aws_access_key_id=config["cos_hmac_keys"]["access_key_id"],
                       aws_secret_access_key=config["cos_hmac_keys"]["secret_access_key"])
As you can see, I passed the HMAC details.
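For the original generate_presigned_post call itself, here is a minimal sketch built on the same HMAC credentials instead of signature_version='oauth'. It assumes the eu-de endpoint from the question, that the endpoint accepts an AWS-style 's3v4' signature for presigned POSTs, and that the service-credential JSON (with cos_hmac_keys) sits in a local file; the file name, bucket and key are placeholders.
import json

import ibm_boto3
from ibm_botocore.client import Config

# Placeholder: the service-credential JSON that was created with {"HMAC": true}
with open("cos_credentials.json") as f:
    config = json.load(f)

# HMAC keys plus an AWS-style signature instead of signature_version='oauth'
cos = ibm_boto3.client(
    's3',
    aws_access_key_id=config["cos_hmac_keys"]["access_key_id"],
    aws_secret_access_key=config["cos_hmac_keys"]["secret_access_key"],
    endpoint_url="https://s3.eu-de.cloud-object-storage.appdomain.cloud",
    config=Config(signature_version='s3v4'))

# generate_presigned_post returns a dict with the POST 'url' and the form 'fields'
post = cos.generate_presigned_post('somebucket', 'somekey', ExpiresIn=300)
print(post['url'])
print(post['fields'])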
Related
I am testing the python-youtube package, using the following code:
from pyyoutube import Api

api = Api(
    client_id="yes-my-client-id-here-i-know",
    client_secret="yes-my-client-secret-here-i-know")

authorization_url = api.get_authorization_url()
access_token = api.generate_access_token(authorization_response=authorization_url[0])

print('authorization_url : >>>>>> ', authorization_url)
print('access_token : >>>>>> ', access_token)
I get the following error.
Traceback (most recent call last):
File "main.py", line 10, in <module>
access_token = api.generate_access_token(authorization_response=authorization_url[0])
File "/Users/sleento/production/python-youtube-api/env/lib/python3.8/site-packages/pyyoutube/api.py", line 263, in generate_access_token
token = oauth_session.fetch_token(
File "/Users/sleento/production/python-youtube-api/env/lib/python3.8/site-packages/requests_oauthlib/oauth2_session.py", line 244, in fetch_token
self._client.parse_request_uri_response(
File "/Users/sleento/production/python-youtube-api/env/lib/python3.8/site-packages/oauthlib/oauth2/rfc6749/clients/web_application.py", line 220, in parse_request_uri_response
response = parse_authorization_code_response(uri, state=state)
File "/Users/sleento/production/python-youtube-api/env/lib/python3.8/site-packages/oauthlib/oauth2/rfc6749/parameters.py", line 284, in parse_authorization_code_response
raise MissingCodeError("Missing code parameter in response.")
oauthlib.oauth2.rfc6749.errors.MissingCodeError: (missing_code) Missing code parameter in response.
I solved that: I was using the client_id parameter instead of api_key.
from pyyoutube import Api

api = Api(api_key="yes-my-APi-key-here-i-know")

authorization_url = api.get_authorization_url()
print('authorization_url : >>>>>> ', authorization_url)
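With the api_key client you can then query public data directly, without any OAuth flow. A small sketch, assuming get_video_by_id is available in the installed python-youtube version; the video id is only a placeholder.
from pyyoutube import Api

api = Api(api_key="yes-my-APi-key-here-i-know")

# Fetch public metadata for a video by id; the id below is just a placeholder.
video_response = api.get_video_by_id(video_id="D-lhorsDlUQ")
print(video_response)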
Apparently the ImageSource attribute of Vision requests should allow "A publicly-accessible image HTTP/HTTPS URL":
https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.Image
https://googleapis.dev/python/vision/latest/vision_v1/types.html
Here, however, the API seems to accept only a Google Cloud Storage URI.
Any ideas what is going on here?
Environment details
OS type and version: macOS Big Sur 11.0.1
Python version: 3.9.1
pip version: 20.3.3
google-cloud-vision version: 2.0.0
Code example
from google.cloud import vision

client = vision.ImageAnnotatorClient()

requests = []
labels = []

# I had an array of image urls in a json file, but using the same url over and over gives the same error
images = range(100)
for img in images:
    source = {"image_uri": "https://images.unsplash.com/photo-1544845120-288673aefccc?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjEyMDd9"}
    image = {"source": source}
    features = [
        {"type_": vision.Feature.Type.LABEL_DETECTION}
    ]
    requests.append({"image": image, "features": features})

# The max number of responses to output in each JSON file
batch_size = 2  # max batch size
gcs_destination = {"uri": "gs://imagesort/results/"}
output_config = {"gcs_destination": gcs_destination,
                 "batch_size": batch_size}

operation = client.async_batch_annotate_images(
    requests=requests, output_config=output_config)

print("Waiting for operation to complete...")
response = operation.result(90)
Stack trace
Traceback (most recent call last):
File "/Users/georgeoconnor/imagesort/env/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/Users/georgeoconnor/imagesort/env/lib/python3.9/site-packages/grpc/_channel.py", line 923, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/Users/georgeoconnor/imagesort/env/lib/python3.9/site-packages/grpc/_channel.py", line 826, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Invalid gcs prefix provided in request image.source.image_uri field. Valid prefixes must start with 'gs://'."
debug_error_string = "{"created":"#1611278568.485631000","description":"Error received from peer ipv6:[2404:6800:4015:800::200a]:443","file":"src/core/lib/surface/call.cc","file_line":1068,"grpc_message":"Invalid gcs prefix provided in request image.source.image_uri field. Valid prefixes must start with 'gs://'.","grpc_status":3}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/georgeoconnor/imagesort/test.py", line 22, in <module>
operation = client.async_batch_annotate_images(
File "/Users/georgeoconnor/imagesort/env/lib/python3.9/site-packages/google/cloud/vision_v1/services/image_annotator/client.py", line 493, in async_batch_annotate_images
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
File "/Users/georgeoconnor/imagesort/env/lib/python3.9/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/Users/georgeoconnor/imagesort/env/lib/python3.9/site-packages/google/api_core/retry.py", line 281, in retry_wrapped_func
return retry_target(
File "/Users/georgeoconnor/imagesort/env/lib/python3.9/site-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/Users/georgeoconnor/imagesort/env/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.InvalidArgument: 400 Invalid gcs prefix provided in request image.source.image_uri field. Valid prefixes must start with 'gs://'.
Looks like only the sync API accepts public image URLs. The async API must use images stored in GCS.
https://github.com/googleapis/python-vision/issues/94
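For comparison, here is a minimal sketch of the synchronous path with the same public URL, assuming google-cloud-vision 2.x and its batch_annotate_images method; it is only an illustration of the sync route, not a drop-in replacement for the async batch job.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# The sync API accepts a publicly accessible HTTP/HTTPS image_uri.
image_uri = ("https://images.unsplash.com/photo-1544845120-288673aefccc"
             "?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080"
             "&fit=max&ixid=eyJhcHBfaWQiOjEyMDd9")
request = {
    "image": {"source": {"image_uri": image_uri}},
    "features": [{"type_": vision.Feature.Type.LABEL_DETECTION}],
}

response = client.batch_annotate_images(requests=[request])
for label in response.responses[0].label_annotations:
    print(label.description, label.score)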
I am trying to upload an image to a WebDAV server. The server is hosted on a Linux machine, and we can view the images on the WebDAV server via their URLs. I am just trying to do a simple upload.
from webdav3.client import Client

options = {
    'webdav_hostname': "http://www.link.com/webdav/",
    'webdav_login': "user",
    'webdav_password': "user"
}

if __name__ == '__main__':
    client = Client(options)
    client.upload_file(remote_path='blank.php/webdav/converted_images',
                       local_path="../Downloads/download.jpeg")
I am getting the following error:
Traceback (most recent call last):
File "/home/user/webDav_test/test.py", line 30, in <module>
local_path="../Downloads/download.jpeg")
File "/home/user/.local/lib/python3.7/site-packages/webdav3/client.py", line 70, in _wrapper
res = fn(self, *args, **kw)
File "/home/user/.local/lib/python3.7/site-packages/webdav3/client.py", line 491, in raise RemoteParentNotFound(urn.path())
webdav3.exceptions.RemoteParentNotFound: Remote parent for: /webdav/converted_images not found
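RemoteParentNotFound indicates the library could not find the parent collection for the remote_path. Not an authoritative fix, but a sketch of checking for and creating the target collection first with the client's check/mkdir helpers; the remote directory name here is only an assumption and should match the server's real layout.
from webdav3.client import Client

options = {
    'webdav_hostname': "http://www.link.com/webdav/",
    'webdav_login': "user",
    'webdav_password': "user"
}
client = Client(options)

# Assumed target collection relative to the WebDAV root; adjust to the real layout.
remote_dir = 'converted_images'

# Create the collection if it does not exist yet, then upload into it.
if not client.check(remote_dir):
    client.mkdir(remote_dir)

client.upload_file(remote_path=remote_dir + '/download.jpeg',
                   local_path="../Downloads/download.jpeg")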
I'm receiving a TypeError when I try to stream my Firebase Realtime Database. Here is an MRE of my code. I've been using other features of the module without problems, but for some reason this error keeps appearing when I try to stream my data.
from firebase import Firebase
import python_jwt as jwt
from gcloud import storage
from sseclient import SSEClient
from Crypto.PublicKey import RSA
from requests_toolbelt.adapters import appengine

config = {
    "apiKey": "*******************************",
    "authDomain": "*********************************",
    "databaseURL": "*********************************",
    "storageBucket": "********************************"
}

pythonfirebase = Firebase(config)
db = pythonfirebase.database()

def stream_handler(message):
    print(message["event"])  # put
    print(message["path"])   # /-K7yGTTEp7O549EzTYtI
    print(message["data"])   # {'title': 'Pyrebase', "body": "etc..."}

my_stream = db.child("placements").stream(stream_handler)
Here is the full traceback:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/Users/temitayoadefemi/PycharmProjects/test7/venv/lib/python3.7/site-packages/firebase/__init__.py", line 593, in start_stream
self.sse = ClosableSSEClient(self.url, session=self.make_session(), build_headers=self.build_headers)
File "/Users/temitayoadefemi/PycharmProjects/test7/venv/lib/python3.7/site-packages/firebase/__init__.py", line 554, in __init__
super(ClosableSSEClient, self).__init__(*args, **kwargs)
File "/Users/temitayoadefemi/PycharmProjects/test7/venv/lib/python3.7/site-packages/sseclient.py", line 48, in __init__
self._connect()
File "/Users/temitayoadefemi/PycharmProjects/test7/venv/lib/python3.7/site-packages/firebase/__init__.py", line 558, in _connect
super(ClosableSSEClient, self)._connect()
File "/Users/temitayoadefemi/PycharmProjects/test7/venv/lib/python3.7/site-packages/sseclient.py", line 56, in _connect
self.resp = requester.get(self.url, stream=True, **self.requests_kwargs)
File "/Users/temitayoadefemi/PycharmProjects/test7/venv/lib/python3.7/site-packages/requests/sessions.py", line 546, in get
return self.request('GET', url, **kwargs)
TypeError: request() got an unexpected keyword argument 'build_headers'
I would appreciate any help.
Try it with the Firebase Admin SDK instead:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

# Placeholders: path to your service account key JSON and your database URL
SERVICE_ACCOUNT_FILE = "path/to/serviceAccountKey.json"
DATABASE_URL = "https://your-project-id.firebaseio.com"

# Fetch the service account key JSON file contents
cred = credentials.Certificate(SERVICE_ACCOUNT_FILE)

# Initialize the app with a service account, granting admin privileges
firebase_admin.initialize_app(cred, {
    'databaseURL': DATABASE_URL
})

# Get a database reference to our posts
ref = db.reference('messages')

def listener(event):
    print(event.event_type)  # can be 'put' or 'patch'
    print(event.path)        # relative to the reference, it seems
    print(event.data)        # new data at /reference/event.path. None if deleted

    node = str(event.path).split('/')[-2]  # you can slice the path according to your requirement
    property = str(event.path).split('/')[-1]
    value = event.data

    if node == 'sala':
        # do something
        pass
    elif node == 'ventilacion':
        # do something
        pass
    else:
        # do something else
        pass

# Read the data at the posts reference (this is a blocking operation)
# print(ref.get())

db.reference('/').listen(listener)
We are using Python 2.7 and the Python SoftLayer 3.0.1 package, and calling the get_records method on the DNSManager class. This currently returns an Internal Server error:
2016-05-11T11:18:04.117406199Z Traceback (most recent call last):
2016-05-11T11:18:04.117715505Z File "/opt/**/**/***.py", line 745, in <module>
2016-05-11T11:18:04.117927757Z httpDnsRecords = dnsManager.get_records(httpDomainRecordId, data=dataspace, type="cname")
2016-05-11T11:18:04.118072183Z File "/usr/local/lib/python2.7/dist-packages/SoftLayer/managers/dns.py", line 152, in get_records
2016-05-11T11:18:04.118152705Z filter=_filter.to_dict(),
2016-05-11T11:18:04.118302389Z File "/usr/local/lib/python2.7/dist-packages/SoftLayer/API.py", line 347, in call_handler
2016-05-11T11:18:04.118398852Z return self(name, *args, **kwargs)
2016-05-11T11:18:04.118512777Z File "/usr/local/lib/python2.7/dist-packages/SoftLayer/API.py", line 316, in call
2016-05-11T11:18:04.118632422Z return self.client.call(self.name, name, *args, **kwargs)
2016-05-11T11:18:04.118814604Z File "/usr/local/lib/python2.7/dist-packages/SoftLayer/API.py", line 176, in call
2016-05-11T11:18:04.118907953Z timeout=self.timeout)
2016-05-11T11:18:04.118995360Z File "/usr/local/lib/python2.7/dist-packages/SoftLayer/transports.py", line 64, in make_xml_rpc_api_call
2016-05-11T11:18:04.119096993Z e.faultCode, e.faultString)
2016-05-11T11:18:04.119547899Z SoftLayer.exceptions.SoftLayerAPIError: SoftLayerAPIError(SOAP-ENV:Server): Internal Error
The httpDomainRecordId is the ID of the domain obtained from SoftLayer, and dataspace is the string 'uk'.
Does anyone know why this would be returning an Internal Error from the server?
The error is likely due to the response containing a large amount of data; this error is documented here, so you can try the following:
1. Increase the timeout in the client (a sketch covering options 1 and 2 follows the example below).
2. Add more filters to your request to limit the result; currently you are using data and type, so try adding host or ttl as well.
3. Try using result limits. The manager does not provide that option, so you need to use direct API calls, e.g.:
import SoftLayer

client = SoftLayer.Client()
zoneId = 12345
objectMask = "id,expire,domainId,host,minimum,refresh,retry,mxPriority,ttl,type,data,responsiblePerson"
result = client['Dns_Domain'].getResourceRecords(id=zoneId, mask=objectMask, limit=200, offset=0)
print(result)
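For options 1 and 2, a sketch using the manager itself; the timeout value, zone id and the extra host filter are placeholders, and the keyword names assume the get_records signature of the SoftLayer version in the question (the traceback shows the client already passes self.timeout to the transport).
import SoftLayer

# Option 1: give large responses more time by raising the client timeout (seconds).
client = SoftLayer.Client(timeout=240)
dns_manager = SoftLayer.DNSManager(client)

zone_id = 12345  # placeholder domain/zone id

# Option 2: narrow the result set by filtering on host (or ttl) as well as data and type.
records = dns_manager.get_records(zone_id, data='uk', type='cname', host='www')
print(records)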