Python PermissionError uploading to an Azure Datalake folder - python

I am trying to upload a file to Azure Data Lake using a Python script.
I am able to download a file from the Data Lake, but the upload raises a permission error, even though I checked all permissions at all levels (Read, Write, Execute, and the option for descendants).
## Works fine (adls is the authenticated core.AzureDLFileSystem client)
from azure.datalake.store import multithread

multithread.ADLDownloader(adls, lpath='C:\\Users\\User1\\file1.txt', rpath='/Test/',
                          nthreads=64, overwrite=True,
                          buffersize=4194304, blocksize=4194304)

## Raises the error
multithread.ADLUploader(adls, rpath='/Test', lpath='C:\\Users\\User1\\HC',
                        nthreads=64, chunksize=268435456, buffersize=4194304,
                        blocksize=4194304, client=None, run=True,
                        overwrite=False, verbose=True)
The error:
File "C:\Users\Python37-32\test_azure.py", line 64, in <module>
overwrite=False, verbose=True)
File "C:\Users\Python37-32\lib\site-packages\azure\datalake\store\multithread.py", line 442, in __init__
self.run()
File "C:\Users\Python37-32\lib\site-packages\azure\datalake\store\multithread.py", line 548, in run
self.client.run(nthreads, monitor)
File "C:\Users\Python37-32\lib\site-packages\azure\datalake\store\transfer.py", line 525, in run
raise DatalakeIncompleteTransferException('One more more exceptions occured during transfer, resulting in an incomplete transfer. \n\n List of exceptions and errors:\n {}'.format('\n'.join(error_list)))
azure.datalake.store.exceptions.DatalakeIncompleteTransferException: One more more exceptions occured during transfer, resulting in an incomplete transfer.
List of exceptions and errors:
C:\Users\User1\HC\AC.TXT -> \Test\AC.TXT, chunk \Test\AC.TXT 0: errored, "PermissionError('/Test/AC.TXT')"
Does somebody have an idea what the problem is?

The Azure account I am using has all the privileges on the Data Lake, but the Azure application (the identity the script authenticates with) did not.
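For context, a hedged sketch of how the adls client used above is typically created with the azure-datalake-store package; the tenant/client values below are placeholders, not details from the question. The point is that the uploads run under this application's identity, so the Data Lake ACLs on /Test must grant Read/Write/Execute to that application, not just to the portal user:

from azure.datalake.store import core, lib, multithread

# Placeholder credentials for the Azure AD application (service principal).
token = lib.auth(tenant_id='<tenant-id>',
                 client_id='<application-client-id>',
                 client_secret='<application-client-secret>')

# ADLUploader/ADLDownloader calls made with this client act with the
# application's identity, so the folder ACLs must grant it access.
adls = core.AzureDLFileSystem(token, store_name='<datalake-store-name>')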

Related

RuntimeError: Pipeline kafka-stream-from-confluent failed in state FAILED: org.rocksdb.RocksDBException: While open directory:/db: Too many open files

Some of my Apache Beam pipelines running on a local Flink cluster crash with the error above, all at roughly the same time. They also behave unexpectedly: all of them read from and write to Kafka topics, but instead of processing data continuously as expected, they wait, process a few thousand records, and then crash.
Do you have any idea what causes this behavior?
ERROR:root:org.rocksdb.RocksDBException: While open a file for appending: /var/folders/4l/yc2_3ncx0_z_89mzc0qrbll80000gn/T/flink-io-7ba42f23-64c7-49cb-96ab-46b732d2287c/job_0fc440f2bcc04398365b6c2bdaad0e0a_op_ExecutableStageDoFnOperator_4de6aa5756261f95d01b75bc435f50e1__1_1__uuid_7710e22f-4b45-4f5d-9005-410d0579291f/db/007077.sst: Too many open files
Traceback (most recent call last):
File "biz_rule_contacts.py", line 408, in <module>
main()
File "biz_rule_contacts.py", line 405, in main
'Write to CONTACTS_OUT topic' >> WriteToKafka(producer_config=config_read_write, topic='D_CONTACTS_OUT')
File "/Users/viktorhrtanek/Dev/beam/lib/python3.7/site-packages/apache_beam/pipeline.py", line 598, in __exit__
self.result.wait_until_finish()
File "/Users/viktorhrtanek/Dev/beam/lib/python3.7/site-packages/apache_beam/runners/portability/portable_runner.py", line 607, in wait_until_finish
raise self._runtime_exception
RuntimeError: Pipeline kafka-stream-from-confluent_1ef2830c-8a1a-4e15-8222-36808e41c81c failed in state FAILED: org.rocksdb.RocksDBException: While open a file for appending: /var/folders/4l/yc2_3ncx0_z_89mzc0qrbll80000gn/T/flink-io-7ba42f23-64c7-49cb-96ab-46b732d2287c/job_0fc440f2bcc04398365b6c2bdaad0e0a_op_ExecutableStageDoFnOperator_4de6aa5756261f95d01b75bc435f50e1__1_1__uuid_7710e22f-4b45-4f5d-9005-410d0579291f/db/007077.sst: Too many open files
I found a similar issue (linked below) and increased the number of open files in my OS, but since I am using managed Kafka from Confluent Cloud, I am not sure whether I can increase the ulimit there; a small sketch for checking the local limit follows the linked question.
Kafka Streaming - Caused by: org.rocksdb.RocksDBException - Too many open files
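One hedged observation: the file-descriptor limit involved here is on the local machine running the Flink job (the RocksDB .sst files sit under the local flink-io temp directory shown in the error), not in Confluent Cloud, so there is nothing to raise on the Kafka side. A small Python sketch for checking and raising the limit of the current process, for illustration only:

import resource

# Inspect the soft/hard limits on open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print('open-file limit: soft=%d, hard=%d' % (soft, hard))

# Raise the soft limit up to the existing hard limit. Note: this only affects
# the current process and its children; the Flink task manager JVM needs its
# own limit raised, e.g. via `ulimit -n` in the shell that starts the cluster.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))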

How to use Flow-guided video completion (FGVC)?

How can you use Flow-guided video completion (FGVC) for a personal file?
The procedure is not specified in the various official sources for those who would like to use FGVC freely from the Google Colab platform (https://colab.research.google.com/drive/1pb6FjWdwq_q445rG2NP0dubw7LKNUkqc?usp=sharing).
As a test, I uploaded a video to Google Drive (on the same account from which I was running the Google Colab scripts), split into individual frames and packed into a .zip archive called "demo1.zip".
I then ran the first script in the sequence, called "Prepare environment", enabled sharing of the video via a public link, pasted the link into the second script (immediately after "wget --quiet"), and in the first "rm" entry I put "demo1.zip" to match the name of my video file.
I proceeded like this after reading the description just above the run button of the second script: "We show a demo on a 15-frames sequence. To process your own data, simply upload the sequence and specify the path."
Running the second script also succeeds and my video file is loaded.
I then move on to the fourth (and last) script, which processes the content through the model to obtain the final result with an enlarged field of view (FOV, i.e. a larger aspect ratio).
After a few seconds of running, the process ends with an error:
File "video_completion.py", line 613, in <module>
main (args)
File "video_completion.py", line 576, in main
video_completion_sphere (args)
File "video_completion.py", line 383, in video_completion_sphere
RAFT_model = initialize_RAFT (args)
File "video_completion.py", line 78, in initialize_RAFT
model.load_state_dict (torch.load (args.model))
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 594, in load
return _load (opened_zipfile, map_location, pickle_module, ** pickle_load_args)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 853, in _load
result = unpickler.load ()
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 845, in persistent_load
load_tensor (data_type, size, key, _maybe_decode_ascii (location))
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 834, in load_tensor
loaded_storages [key] = restore_location (storage, location)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 175, in default_restore_location
result = fn (storage, location)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 151, in _cuda_deserialize
device = validate_cuda_device (location)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 135, in validate_cuda_device
raise RuntimeError ('Attempting to deserialize object on a CUDA'
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available () is False. If you are running on a CPU-only machine, please use torch.load with map_location = torch.device ('cpu') to map your storages to the CPU.
What is wrong with the execution? Is there a way to fix it so that the process can finish on Google Colab?
Let me know!
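For reference, the RuntimeError itself points at two possible remedies: switch the Colab runtime to a GPU, or load the checkpoint with map_location pointing at the CPU. A hedged sketch of the latter, written as a hypothetical helper rather than FGVC's actual initialize_RAFT code:

import torch

def load_checkpoint_device_safe(model, checkpoint_path):
    """Load a checkpoint onto the GPU if one is available, else fall back to CPU."""
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    state_dict = torch.load(checkpoint_path, map_location=device)
    model.load_state_dict(state_dict)
    return model.to(device)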

How to access Google Cloud Storage Bucket from AI Platform job

My Google AI Platform / ML Engine training job doesn't seem to have access to the training file I put into a Google Cloud Storage bucket.
Google's AI Platform / ML Engine requires that you store training data files in one of their Cloud Storage buckets. Accessing the bucket locally from the CLI works fine. However, when I submit a training job (after ensuring the data is in the appropriate location in my Cloud Storage bucket), I get an error that seems to be caused by not having access via the bucket's Link URL.
The error comes from trying to read what looks to me like the contents of a web page that Google served up saying "Hey, you don't have access to this." I see gaia.loginAutoRedirect.start(5000, and a URL with this flag at the end: noautologin=true.
I know permissions between AI Platform and Cloud Storage are a thing, but both are under the same project, and the walkthroughs I'm using at the very least imply that no further action is required when both are under the same project.
I am assuming I need to use the Link URL provided in the bucket's Overview tab. I tried the gsutil link, but the Python code (from Google's CloudML Samples repo) complained about using gs://.
I think Google's examples are proving insufficient since their example data is from a public URL rather than a private Cloud Storage bucket.
Ultimately, the error message I get is a Python error. But as I said, it is preceded by a bunch of INFO logs of HTML/CSS/JS from Google saying I don't have permission to get the file. Those logs only appear because I also added a print statement to util.py, right before read_csv() on the train file. (So the Python parse error comes from trying to parse HTML as CSV.)
...
INFO g("gaia.loginAutoRedirect.stop",function(){var b=n;b.b=!0;b.a&&(clearInterval(b.a),b.a=null)});
INFO gaia.loginAutoRedirect.start(5000,
INFO 'https:\x2F\x2Faccounts.google.com\x2FServiceLogin?continue=https%3A%2F%2Fstorage.cloud.google.com%2F<BUCKET_NAME>%2Fdata%2F%2Ftrain.csv\x26followup=https%3A%2F%2Fstorage.cloud.google.com%2F<BUCKET_NAME>%2Fdata%2F%2Ftrain.csv\x26service=cds\x26passive=1209600\x26noautologin=true',
ERROR Command '['python', '-m', u'trainer.task', u'--train-files', u'gs://<BUCKET_NAME>/data/train.csv', u'--eval-files', u'gs://<BUCKET_NAME>/data/test.csv', u'--batch-pct', u'0.2', u'--num-epochs', u'1000', u'--verbosity', u'DEBUG', '--job-dir', u'gs://<BUCKET_NAME>/predictor']' returned non-zero exit status 1.
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 137, in <module>
train_and_evaluate(args)
File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 80, in train_and_evaluate
train_x, train_y, eval_x, eval_y = util.load_data()
File "/root/.local/lib/python2.7/site-packages/trainer/util.py", line 168, in load_data
train_df = pd.read_csv(training_file_path, header=0, names=_CSV_COLUMNS, na_values='?')
File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 678, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 446, in _read
data = parser.read(nrows)
File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 1036, in read
ret = self._engine.read(nrows)
File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 1848, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 876, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 891, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 945, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 932, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2112, in pandas._libs.parsers.raise_parser_error
ParserError: Error tokenizing data. C error: Expected 5 fields in line 205, saw 961
To get the data, I'm more or less trying to mimic this:
https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/tf-keras/trainer/util.py
Various ways I have tried to address my bucket in my copy of util.py:
https://console.cloud.google.com/storage/browser/<BUCKET_NAME>/data (think this was the "Link URL" back in May)
https://storage.cloud.google.com/<BUCKET_NAME>/data (this is the "Link URL" now - in July)
gs://<BUCKET_NAME>/data (this is the URI - which gives a different error about not liking gs as a url type)
Transferring the answer from a comment above:
Looks like the URL approach requires cookie-based authentication if it's not a public object. Instead of using a URL, I would suggest using tf.gfile with a gs:// path, as is done in the Keras sample. If you need to download the file from GCS in a separate step, you can use the GCS client library.
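A hedged sketch of what that suggestion might look like inside util.py, reading the object through its gs:// URI instead of the storage.cloud.google.com Link URL; <BUCKET_NAME> remains a placeholder and the CSV column names are omitted:

import pandas as pd
import tensorflow as tf

# Open the object through TensorFlow's GCS-aware file API (tf.io.gfile in
# TF 1.13+/2.x; older 1.x releases use tf.gfile.Open instead).
training_file_path = 'gs://<BUCKET_NAME>/data/train.csv'
with tf.io.gfile.GFile(training_file_path, 'r') as f:
    train_df = pd.read_csv(f, header=0, na_values='?')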

Import MongoDB data to Azure ML Studio from Python Script

I am currently running the following code in Azure ML Studio's Execute Python Script module (Python 2.7.11).
It tries to return the results obtained from MongoDB as a DataFrame using PyMongo.
I got an error like:
"C:\pyhome\lib\site-packages\pymongo\topology.py", line 97, in select_servers
self._error_message(selector))
ServerSelectionTimeoutError: ... ('The write operation timed out',)
Please let me know if you know the cause of the error and what should be improved.
My source code:
import pymongo as m
import pandas as pd

def azureml_main(dataframe1 = None, dataframe2 = None):
    uri = "mongodb://xxxxx:yyyyyyyyyyyyyyy#zzz.mongodb.net:xxxxx/?ssl=true&replicaSet=globaldb"
    client = m.MongoClient(uri, connect=False)
    db = client['dbName']
    coll = db['colectionName']
    cursor = coll.find()
    df = pd.DataFrame(list(cursor))
    return df,
Error Details:
Error 0085: The following error occurred during script evaluation, please view the output log for more information:
---------- Start of error message from Python interpreter ----------
Caught exception while executing function: Traceback (most recent call last):
File "C:\server\invokepy.py", line 199, in batch
odfs = mod.azureml_main(*idfs)
File "C:\temp\55a174d8dc584942908423ebc0bac110.py", line 32, in azureml_main
result = pd.DataFrame(list(cursor))
File "C:\pyhome\lib\site-packages\pymongo\cursor.py", line 977, in next
if len(self.__data) or self._refresh():
File "C:\pyhome\lib\site-packages\pymongo\cursor.py", line 902, in _refresh
self.__read_preference))
File "C:\pyhome\lib\site-packages\pymongo\cursor.py", line 813, in __send_message
**kwargs)
File "C:\pyhome\lib\site-packages\pymongo\mongo_client.py", line 728, in _send_message_with_response
server = topology.select_server(selector)
File "C:\pyhome\lib\site-packages\pymongo\topology.py", line 121, in select_server
address))
File "C:\pyhome\lib\site-packages\pymongo\topology.py", line 97, in select_servers
self._error_message(selector))
ServerSelectionTimeoutError: xxxxx-xxx.mongodb.net:xxxxx: ('The write operation timed out',)
Process returned with non-zero exit code 1
As far as I know, there is a limitation of the Execute Python Script module that causes this issue; please refer to the Limitations section quoted below.
Limitations
The Execute Python Script currently has the following limitations:
Sandboxed execution. The Python runtime is currently sandboxed and, as a result, does not allow access to the network or to the local file system in a persistent manner. All files saved locally are isolated and deleted once the module finishes. The Python code cannot access most directories on the machine it runs on, the exception being the current directory and its subdirectories.
Because of this, you cannot directly import data from Azure Cosmos DB over the network via the PyMongo driver in the Execute Python Script module. Instead, you can use the Import Data module with the connection details and parameters of your Azure Cosmos DB and connect its output to the input of Execute Python Script to get the data.
For more information about importing data from online sources, please refer to the section "Import from online data sources" of the official document "Import your training data into Azure Machine Learning Studio from various data sources".
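A hedged sketch of the Execute Python Script side once the Import Data module is wired to its first input port; the imported rows simply arrive as dataframe1:

def azureml_main(dataframe1=None, dataframe2=None):
    # dataframe1 holds the rows that the Import Data module pulled from
    # Azure Cosmos DB, so no network access is needed inside the sandbox.
    df = dataframe1
    # ... any pandas processing on df goes here ...
    return df,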

TransformationError Reasons get_serving_url / Images API

I'm running into the following error using get_serving_url in order to serve images from my bucket.
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 1087, in synctasklet_wrapper
return taskletfunc(*args, **kwds).get_result()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 1057, in tasklet_wrapper
result = func(*args, **kwds)
File "/base/data/home/apps/e~tileserve20171207t210519.406056180994857717/blob_upload.py", line 70, in post
bf.put_async()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 3473, in _put_async
self._pre_put_hook()
File "/base/data/home/apps/e~tileserve/20171207t210519.406056180994857717/blob_files.py", line 124, in _pre_put_hook
print images.get_serving_url(None, filename='/gs' + '/tileserve.appspot.com/user2test4test4RGB20170927.tif')
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/images/__init__.py", line 1868, in get_serving_url
return rpc.get_result()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/images/__init__.py", line 1972, in get_serving_url_hook
raise _ToImagesError(e, readable_blob_key)
TransformationError
When I upload an image to my bucket it works, but when I create an image through processing that should then be exposed through get_serving_url, I get a TransformationError.
I tried two variants for serving images:
test = blobstore.create_gs_key('/gs' + '/path2object')
images.get_serving_url(test, secure_url=True)
images.get_serving_url(None, filename='/gs' + '/' + <bucket name>+ '/'+ <object name>)
I also set the bucket object ACL permissions and gave the App Engine default service account the Storage Admin IAM role. Neither change fixed anything, but both are important in order to access objects in a bucket.
Has somebody experienced this issue? What could the error be? I do not understand why it works when I upload an image but not for images that are generated through processing.
The traceback suggests you may be trying to call images.get_serving_url() while an async operation may still be in progress.
If that op is actually saving the transformed image in GCS, it could explain the failure: get_serving_url() checks that the file is a valid image, and that check fails with TransformationError if the file has not been saved yet.
If so - you can fix the problem by either:
switching to the sync version of saving the image
waiting until saving the image completes (in case you have some other stuff to do in the meantime) - get the result of the async call before invoking get_serving_url()
repeatedly trying get_serving_url() while catching TransformationError until it no longer fails that way (a capped-retry sketch follows below). This is a bit risky, as it can end up in an infinite loop if TransformationError is raised for reasons other than the image save simply being incomplete.
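A hedged sketch of that last option with a retry cap so the infinite-loop risk stays bounded; gs_path is a '/gs/<bucket>/<object>' string like the ones in the variants above:

import time
from google.appengine.api import images

def get_serving_url_with_retry(gs_path, attempts=5, delay_seconds=1):
    # Retry while the (possibly still-saving) object is not yet a valid image.
    for attempt in range(attempts):
        try:
            return images.get_serving_url(None, filename=gs_path)
        except images.TransformationError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay_seconds)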
The issue is not with the get_serving_url() method; it is how you are accessing the objects in the Google Cloud Storage bucket.
I switched the access control on my bucket from uniform to fine-grained, which fixed the problem for me.
