I am trying to upload an image to a WebDAV server. The server is hosted on Linux, and we can view the images on the WebDAV server through the URL. I am just trying to do a simple upload.
from webdav3.client import Client

options = {
    'webdav_hostname': "http://www.link.com/webdav/",
    'webdav_login': "user",
    'webdav_password': "user"
}

if __name__ == '__main__':
    client = Client(options)
    client.upload_file(remote_path='blank.php/webdav/converted_images',
                       local_path="../Downloads/download.jpeg")
I am getting the following error:
Traceback (most recent call last):
  File "/home/user/webDav_test/test.py", line 30, in <module>
    local_path="../Downloads/download.jpeg")
  File "/home/user/.local/lib/python3.7/site-packages/webdav3/client.py", line 70, in _wrapper
    res = fn(self, *args, **kw)
  File "/home/user/.local/lib/python3.7/site-packages/webdav3/client.py", line 491
    raise RemoteParentNotFound(urn.path())
webdav3.exceptions.RemoteParentNotFound: Remote parent for: /webdav/converted_images not found
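For context, RemoteParentNotFound means the client looked for the parent collection of remote_path on the server and did not find it. A minimal sketch of one way around that, assuming the stock webdavclient3 API (check and mkdir are real Client methods; the remote directory and file names here are only illustrative):

from webdav3.client import Client

options = {
    'webdav_hostname': "http://www.link.com/webdav/",
    'webdav_login': "user",
    'webdav_password': "user"
}
client = Client(options)

# Create the remote parent collection first if it is missing,
# then upload into it (paths are resolved against webdav_hostname).
if not client.check("converted_images"):
    client.mkdir("converted_images")
client.upload_file(remote_path="converted_images/download.jpeg",
                   local_path="../Downloads/download.jpeg")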
So I have a very strange problem. I am working on an application that uses a GitLab instance (version: 15.5.1) running in the background on my CentOS server. I am using the API, the python-gitlab library, and my Flask app. I made a small change to my function, adding a base64 decoder; before this change everything worked just fine. So after this change I sent one request with Postman to see if it works.
And this is where the problem starts: the function works, it decodes the base64 and sends it to the server, where it is saved in the repo. BUT the server responds with 500 INTERNAL SERVER ERROR in Postman.
ERROR FROM TERMINAL:
[2023-02-17 10:07:45,597] ERROR in app: Exception on /data [POST]
Traceback (most recent call last):
  File "exceptions.py", line 337, in wrapped_f
    return f(*args, **kwargs)
  File "mixins.py", line 246, in list
    obj = self.gitlab.http_list(path, **data)
  File "client.py", line 939, in http_list
    return list(GitlabList(self, url, query_data, **kwargs))
  File "client.py", line 1231, in __next__
    return self.next()
  File "client.py", line 1242, in next
    self._query(self._next_url, **self._kwargs)
  File "/client.py", line 1154, in _query
    result = self._gl.http_request("get", url, query_data=query_data, **kwargs)
  File "client.py", line 798, in http_request
    raise gitlab.exceptions.GitlabHttpError(
gitlab.exceptions.GitlabHttpError: 404: HERE STARTS LONG HTML FILE
MY FUNCTION: type and text are JSON parameters; the JSON file is below this code.
import base64
import uuid

# gl (the Gitlab client) and config are initialised elsewhere in the app
def pushFile(type, text):
    decoded_text = base64.b64decode(text)
    project_id = config.REPO_ID
    project = gl.projects.get(project_id)
    # RANDOM ID
    uni_id = uuid.uuid1()
    f = project.files.create({'file_path': f'{compared_type}' + '_RULES/' + f'{type}' + '_' + f'{uni_id}' + '.txt',
                              'branch': 'main',
                              'content': f'{decoded_text}',
                              'author_email': 'test@example.com',
                              'author_name': 'yourname',
                              'commit_message': 'Create testfile'})
JSON:
{
    "type": "radar",
    "text": "dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdA=="
}
So I tried to:
Restart the GitLab instance
Delete the base64 decoder
But nothing helped; I still get the 500 error even though the files are uploaded. Does someone have any idea what might be wrong?
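One detail worth flagging in the snippet above: base64.b64decode returns bytes, so f'{decoded_text}' stores a literal "b'...'" string as the file content. A minimal sketch of the decode step, assuming the payload is UTF-8 text (this is a guess at a contributing problem, not a confirmed cause of the 404 in the traceback):

import base64

text = "dGVzdHRlc3Q="  # sample base64 payload
decoded_text = base64.b64decode(text).decode("utf-8")  # bytes -> str
print(decoded_text)  # "testtest", safe to pass as 'content'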
I have four files, main.py, jobs.zip, libs.zip & params.yaml, and these I have stored in an Azure Storage Account container.
Now I have this code which builds a payload and tries to run a Spark job using that payload. The payload holds the location links of these 4 files.
hook = AzureSynapseHook(
    azure_synapse_conn_id=self.azure_synapse_conn_id, spark_pool=self.spark_pool
)
payload = SparkBatchJobOptions(
    name=f"{self.job_name}_{self.app_id}",
    file=f"abfss://{Variable.get('ARTIFACT_BUCKET')}@{Variable.get('ARTIFACT_ACCOUNT')}.dfs.core.windows.net/{self.env}/{SPARK_DIR}/main.py",
    arguments=self.job_args,
    python_files=[
        f"abfss://{Variable.get('ARTIFACT_BUCKET')}@{Variable.get('ARTIFACT_ACCOUNT')}.dfs.core.windows.net/{self.env}/{SPARK_DIR}/jobs.zip",
        f"abfss://{Variable.get('ARTIFACT_BUCKET')}@{Variable.get('ARTIFACT_ACCOUNT')}.dfs.core.windows.net/{self.env}/{SPARK_DIR}/libs.zip",
    ],
    files=[
        f"abfss://{Variable.get('ARTIFACT_BUCKET')}@{Variable.get('ARTIFACT_ACCOUNT')}.dfs.core.windows.net/{self.env}/{SPARK_DIR}/params.yaml"
    ],
)
self.log.info("Executing the Synapse spark job.")
response = hook.run_spark_job(payload=payload)
I have checked the location links and they are correct, but when I run this on Airflow it throws an error related to the payload, which I think is saying that it is not able to grab the links.
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/azure/core/pipeline/transport/_base.py", line 579, in format_url
    base = self._base_url.format(**kwargs).rstrip("/")
KeyError: 'endpoint'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/airflow/dags/operators/spark/__init__.py", line 36, in execute
    return self.executor.execute()
  File "/usr/local/airflow/dags/operators/spark/azure.py", line 60, in execute
    response = hook.run_spark_job(payload=payload)
  File "/usr/local/lib/python3.9/site-packages/airflow/providers/microsoft/azure/hooks/synapse.py", line 144, in run_spark_job
    job = self.get_conn().spark_batch.create_spark_batch_job(payload)
  File "/usr/local/lib/python3.9/site-packages/azure/synapse/spark/operations/_spark_batch_operations.py", line 163, in create_spark_batch_job
    request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
  File "/usr/local/lib/python3.9/site-packages/azure/core/pipeline/transport/_base.py", line 659, in post
    request = self._request(
  File "/usr/local/lib/python3.9/site-packages/azure/core/pipeline/transport/_base.py", line 535, in _request
    request = HttpRequest(method, self.format_url(url))
  File "/usr/local/lib/python3.9/site-packages/azure/core/pipeline/transport/_base.py", line 582, in format_url
    raise ValueError(err_msg.format(key.args[0]))
ValueError: The value provided for the url part endpoint was incorrect, and resulted in an invalid url
I also want to know the difference between abfss and wasbs, and where I should upload my files so that the code will be able to grab the links.
Maybe I am uploading the files to the wrong place.
You have something wrong in the connection self.azure_synapse_conn_id: the host (the Synapse workspace URL) is not valid. Here is an example of the connection:
Connection(
    conn_id=DEFAULT_CONNECTION_CLIENT_SECRET,
    conn_type="azure_synapse",
    host="https://testsynapse.dev.azuresynapse.net",
    login="clientId",
    password="clientSecret",
    extra=json.dumps(
        {
            "extra__azure_synapse__tenantId": "tenantId",
            "extra__azure_synapse__subscriptionId": "subscriptionId",
        }
    ),
)
For the difference between abfss and wasbs, here is a detailed answer about the topic.
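In short: wasbs:// is the older WASB (Windows Azure Storage Blob) driver and talks to the blob endpoint over TLS, while abfss:// is the ABFS driver for ADLS Gen2 and talks to the dfs endpoint, which is what Synapse Spark expects for accounts with a hierarchical namespace. The URI shapes differ only in the endpoint; the placeholders below are illustrative:

# ADLS Gen2 via the ABFS driver -- what the Synapse payload above uses
abfss_uri = "abfss://<container>@<account>.dfs.core.windows.net/<path>/main.py"

# Legacy Blob storage via the WASB driver
wasbs_uri = "wasbs://<container>@<account>.blob.core.windows.net/<path>/main.py"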
I am using the Python Docker SDK to create multiple Chrome containers. Below is my script.
Here I first pull the Docker image and then try to create 2 containers from it. But it fails with an error message saying the port is already in use, even though I increment the port value with the container count.
import docker, sys

class CreateContainer:
    def __init__(self):
        self.client = CreateContainer.create_client()

    @staticmethod
    def create_client():
        client = docker.from_env()
        return client

    def pull_image(self, image_name):
        image = self.client.images.pull(image_name)
        print(image.tags)  # docker-py Image objects expose .tags, not .name

    def create_containers(self, image, container_name, expose_port, container_count=1):
        container = self.client.containers.run(
            image,
            name=container_name,
            hostname=container_name,
            ports=expose_port,
            detach=True
        )
        for line in container.logs():
            print(line)
        return container

if __name__ == '__main__':
    threads = int(sys.argv[1])
    c_obj = CreateContainer()
    for i in range(1, threads + 1):
        c_obj.create_containers("selenium/standalone-chrome", "Chrome_{0}".format(i), expose_port={5550+i:4444})
------Run----------
python test.py 2
------error-----
Traceback (most recent call last):
  File "C:\Program Files\Python39\lib\site-packages\docker\api\client.py", line 268, in _raise_for_status
    response.raise_for_status()
  File "C:\Program Files\Python39\lib\site-packages\requests\models.py", line 943, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localnpipe/v1.40/containers/260894dbaec6946e5f31fdbfb5307182d2f621c12a38f328f6efac58df58854d/start
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "C:\Users\Desktop\FreeLance\Utility\src\docker_engine\create_container.py", line 53, in <module>
    c_obj.create_containers("selenium/standalone-chrome", "Chrome_{0}".format(i), expose_port={5550+i:4444})
  File "C:\Users\Desktop\FreeLance\Utility\src\docker_engine\create_container.py", line 20, in create_containers
    container = self.client.containers.run(
  File "C:\Program Files\Python39\lib\site-packages\docker\models\containers.py", line 818, in run
    container.start()
  File "C:\Program Files\Python39\lib\site-packages\docker\models\containers.py", line 404, in start
    return self.client.api.start(self.id, **kwargs)
  File "C:\Program Files\Python39\lib\site-packages\docker\utils\decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "C:\Program Files\Python39\lib\site-packages\docker\api\container.py", line 1111, in start
    self._raise_for_status(res)
  File "C:\Program Files\Python39\lib\site-packages\docker\api\client.py", line 270, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "C:\Program Files\Python39\lib\site-packages\docker\errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error for http+docker://localnpipe/v1.40/containers/260894dbaec6946e5f31fdbfb5307182d2f621c12a38f328f6efac58df58854d/start: Internal Server Error ("driver failed programming external connectivity on endpoint Chrome_2 (c0c77528743a1e3153201565b2cb520243b66adbb903bb69e91a00c4399aca62): Bind for 0.0.0.0:4444 failed: port is already allocated")
While providing ports, the syntax is container_port: host_port, so the call should be:
c_obj.create_containers("selenium/standalone-chrome", "Chrome_{0}".format(i), expose_port={4444: 5550+i})
Documentation: https://docker-py.readthedocs.io/en/stable/containers.html
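As a self-contained sanity check, here is a minimal sketch of the fixed loop; the '4444/tcp' key form is the documented docker-py way of naming the container port, and the image and container names are taken from the question:

import docker

client = docker.from_env()
for i in range(1, 3):
    # container port 4444 -> host ports 5551, 5552, ...
    client.containers.run(
        "selenium/standalone-chrome",
        name="Chrome_{0}".format(i),
        ports={'4444/tcp': 5550 + i},
        detach=True,
    )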
I created credentials (tried both Writer and Manager) in the web interface, including {"HMAC": true}.
I have used these credentials successfully for more basic actions such as put_object and upload_file.
However, I cannot get generate_presigned_post to work. It generates the following error:
ibm_botocore.exceptions.UnsupportedSignatureVersionError: Signature version is not supported: oauth-presign-post
I run the following code,
import ibm_boto3
from ibm_botocore.client import Config

class COSPresignedURL():
    def __init__(self, config):
        cfg = Config(signature_version='oauth', s3={"payload_signing_enabled": True})
        self.cos = ibm_boto3.client(
            's3',
            ibm_api_key_id=config['api_key'],
            ibm_service_instance_id=config['instance_id'],
            ibm_auth_endpoint=config['auth_endpoint'],
            endpoint_url=config['url_endpoint'],
            config=cfg)

    def generate(self, bucket, key, Fields=None, Conditions=None, ExpiresIn=300):
        return self.cos.generate_presigned_post(bucket, key, Fields, Conditions, ExpiresIn)

def main():
    config = {
        "api_key": "VALUE OF apikey FROM CLOUD CREDENTIALS",
        "instance_id": "VALUE OF resource_instance_id FROM CLOUD CREDENTIALS",
        "auth_endpoint": "https://iam.cloud.ibm.com/identity/token",
        "url_endpoint": "https://s3.eu-de.cloud-object-storage.appdomain.cloud"
    }
    bucket = 'somebucket'
    poster = COSPresignedURL(config)
    uri = poster.generate(bucket, 'somekey')
    print(f'{uri}')

if __name__ == '__main__':
    main()
which generates the following error in full,
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ibm_botocore/signers.py", line 149, in sign
    auth = self.get_auth_instance(**kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ibm_botocore/signers.py", line 222, in get_auth_instance
    signature_version=signature_version)
ibm_botocore.exceptions.UnknownSignatureVersionError: Unknown Signature Version: oauth-presign-post.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "tmp.py", line 35, in <module>
    main()
  File "tmp.py", line 30, in main
    uri = poster.generate(bucket, 'somekey')
  File "tmp.py", line 16, in generate
    return self.cos.generate_presigned_post(bucket, key, Fields, Conditions, ExpiresIn)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ibm_botocore/signers.py", line 714, in generate_presigned_post
    expires_in=expires_in)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ibm_botocore/signers.py", line 526, in generate_presigned_post
    'PutObject', request, region_name, 'presign-post')
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/ibm_botocore/signers.py", line 153, in sign
    signature_version=signature_version)
ibm_botocore.exceptions.UnsupportedSignatureVersionError: Signature version is not supported: oauth-presign-post
I have that working as follows. Once I have created the client and connected to the COS endpoint, my final call looks like this:
theURL = cos.generate_presigned_url('get_object',
                                    Params={'Bucket': buckets[0],
                                            'Key': objects[0]},
                                    ExpiresIn=100)
It seems that I use that function with different parameters. I constructed the cos object as shown here:
cos = ibm_boto3.client('s3',
                       config["apikey"],
                       endpoint_url='https://' + cos_host,
                       aws_access_key_id=config["cos_hmac_keys"]["access_key_id"],
                       aws_secret_access_key=config["cos_hmac_keys"]["secret_access_key"])
As you can see, I passed the HMAC details.
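Building on that, here is a minimal sketch of generate_presigned_post with HMAC credentials instead of the oauth signer (untested against IBM COS; 's3v4' is the standard botocore HMAC signature version, and the config keys mirror the ones above):

import ibm_boto3
from ibm_botocore.client import Config

# cos_hmac_keys comes from a service credential created with {"HMAC": true}
cos = ibm_boto3.client(
    's3',
    aws_access_key_id=config["cos_hmac_keys"]["access_key_id"],
    aws_secret_access_key=config["cos_hmac_keys"]["secret_access_key"],
    endpoint_url=config["url_endpoint"],
    config=Config(signature_version='s3v4'))

post = cos.generate_presigned_post('somebucket', 'somekey', ExpiresIn=300)
print(post['url'], post['fields'])  # URL plus the form fields to POST along with the file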
I am running Neo4j 2.2.1 on an Ubuntu Amazon EC2 instance. When I try to connect through Python using py2neo-2.0.7, I get the following error:
py2neo.packages.httpstream.http.SocketError: Operation not permitted
I am able to access the web-interface through http://52.10.**.***:7474/browser/
CODE :-
from py2neo import Graph, watch, Node, Relationship

url_graph_conn = "https://neo4j:password@52.10.**.***:7474/db/data/"
print url_graph_conn
my_conn = Graph(url_graph_conn)
babynames = my_conn.find("BabyName")
for babyname in babynames:
    print 2
Error message :-
https://neo4j:password@52.10.**.***:7474/db/data/
Traceback (most recent call last):
  File "C:\Users\rharoon002\eclipse_workspace\peace\peace\core\graphconnection.py", line 39, in <module>
    for babyname in babynames:
  File "C:\Python27\lib\site-packages\py2neo\core.py", line 770, in find
    response = self.cypher.post(statement, parameters)
  File "C:\Python27\lib\site-packages\py2neo\core.py", line 667, in cypher
    metadata = self.resource.metadata
  File "C:\Python27\lib\site-packages\py2neo\core.py", line 213, in metadata
    self.get()
  File "C:\Python27\lib\site-packages\py2neo\core.py", line 258, in get
    response = self.__base.get(headers=headers, redirect_limit=redirect_limit, **kwargs)
  File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 966, in get
    return self.__get_or_head("GET", if_modified_since, headers, redirect_limit, **kwargs)
  File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 943, in __get_or_head
    return rq.submit(redirect_limit=redirect_limit, **kwargs)
  File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 433, in submit
    http, rs = submit(self.method, uri, self.body, self.headers)
  File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 362, in submit
    raise SocketError(code, description, host_port=uri.host_port)
py2neo.packages.httpstream.http.SocketError: Operation not permitted
You are trying to access neo4j via https on the standard port for http (7474):
url_graph_conn = "https://neo4j:password@52.10.**.***:7474/db/data/"
The standard port for an https connection is 7473. Try:
url_graph_conn = "https://neo4j:password@52.10.**.***:7473/db/data/"
And make sure you can access the web interface via https:
https://52.10.**.***:7473/browser/
You can change/see the port settings in your neo4j-server.properties file.
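For reference, the relevant settings in conf/neo4j-server.properties on a 2.x install look like this (the values shown are the usual defaults; check your own file before relying on them):

# conf/neo4j-server.properties (Neo4j 2.x)
org.neo4j.server.webserver.port=7474
org.neo4j.server.webserver.https.enabled=true
org.neo4j.server.webserver.https.port=7473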