I'm attempting to upload a text file to Dropbox using this code:
def uploadFile(file):
    f = open('logs/%s.txt' % file)
    response = client.put_file('/%s.txt' % file, f)
    print "Uploaded log file %s" % file
Connecting to Dropbox works perfectly fine; it's just when I upload files that I receive this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\dropbox_python_sdk-1.5.1-py2.7.egg\dropbox\client.py", line 352, in put_file
return self.rest_client.PUT(url, file_obj, headers)
File "C:\Python27\lib\site-packages\dropbox_python_sdk-1.5.1-py2.7.egg\dropbox\rest.py", line 265, in PUT
return cls.IMPL.PUT(*n, **kw)
File "C:\Python27\lib\site-packages\dropbox_python_sdk-1.5.1-py2.7.egg\dropbox\rest.py", line 211, in PUT
return self.request("PUT", url, body=body, headers=headers, raw_response=raw_response)
File "C:\Python27\lib\site-packages\dropbox_python_sdk-1.5.1-py2.7.egg\dropbox\rest.py", line 174, in request
raise util.AnalyzeFileObjBug(clen, bytes_read)
dropbox.util.AnalyzeFileObjBug:
Expected file object to have 18 bytes, instead we read 17 bytes.
File size detection may have failed (see dropbox.util.AnalyzeFileObj)
Google has given me no help with this one.
Sounds like you are a victim of newline unification. The file object reports a file size of 18 bytes ("abcdefghijklmnop\r\n") but you read only 17 bytes ("abcdefghijklmnop\n").
Open the file in binary mode to avoid this:
f = open('logs/%s.txt' % file, 'rb')
The default is to use text mode, which may convert '\n' characters to a platform-specific representation on writing and back on reading.
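Applied to the original function, a minimal sketch of the fix looks like this (assuming client is the already-authenticated Dropbox client from the question):
def uploadFile(file):
    # Binary mode keeps the on-disk '\r\n' bytes intact, so the size the SDK
    # detects matches the number of bytes actually read.
    with open('logs/%s.txt' % file, 'rb') as f:
        response = client.put_file('/%s.txt' % file, f)
    print "Uploaded log file %s" % file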
I'm trying to print a QR code in the XmlReceipt (ESCPOS printed receipt) template, but I'm getting this error printed on the actual receipt:
Traceback (most recent call last):
File "/home/pi/odoo/addons/hw_escpos/controllers/main.py", line 169, in run
printer.receipt(data)
File "/home/pi/odoo/addons/hw_escpos/escpos/escpos.py", line 717, in receipt
raise e
File "/home/pi/odoo/addons/hw_escpos/escpos/escpos.py", line 704, in receipt
print_elem(stylestack,serializer,root)
File "/home/pi/odoo/addons/hw_escpos/escpos/escpos.py", line 594, in print_elem
print_elem(stylestack,serializer,child)
File "/home/pi/odoo/addons/hw_escpos/escpos/escpos.py", line 594, in print_elem
print_elem(stylestack,serializer,child)
File "/home/pi/odoo/addons/hw_escpos/escpos/escpos.py", line 680, in print_elem
self.print_base64_image(bytes(elem.attrib['src'], 'utf-8'))
File "/home/pi/odoo/addons/hw_escpos/escpos/escpos.py", line 445, in print_base64_image
img_rgba = Image.open(f)
File "/usr/lib/python3/dist-packages/PIL/Image.py", line 2687, in open
% (filename if filename else fp))
OSError: cannot identify image file <_io.BytesIO object at 0x6b132f00>
I'm copying the exact same solution as the Saudi Arabia module https://github.com/odoo/odoo/blob/14.0/addons/l10n_sa_pos/, and I have already checked that it renders correctly at this step https://github.com/odoo/odoo/blob/12.0/addons/point_of_sale/static/src/js/screens.js#L1653:
print_xml: function() {
    var receipt = QWeb.render('XmlReceipt', this.get_receipt_render_env());
    this.pos.proxy.print_receipt(receipt);
    this.pos.get_order()._printed = true;
},
The image is sent in base64 svg+xml format, and I have already installed the iotboxv21_04 version. So I suspect that Pillow 5.4.1 (which I have checked comes with that IoT Box version) can't open the SVG file type that was sent. Should I send it as PNG instead? How can I achieve that?
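For reference, Pillow has no SVG support at all, regardless of version, so sending a PNG is the usual route. Below is a minimal sketch of one way to convert the base64 svg+xml payload to base64 PNG in Python; it assumes the cairosvg package, which is not part of the stock IoT Box image, and the helper name is only illustrative:
import base64
import cairosvg

def svg_data_url_to_png_b64(src):
    """Convert a 'data:image/svg+xml;base64,...' src into base64-encoded PNG bytes."""
    # Strip the data-URL prefix and decode the raw SVG markup.
    svg_bytes = base64.b64decode(src.split(',', 1)[1])
    # cairosvg renders the SVG to PNG bytes, which Pillow can open.
    png_bytes = cairosvg.svg2png(bytestring=svg_bytes)
    return base64.b64encode(png_bytes)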
My application needs to read a huge compressed (.gz) file from an S3 bucket. Each time we need to read a smaller portion of the data by giving a start byte (startPos) and an end byte (endPos).
def read_chunks(bucketname, key, startPos, endPos):
    s3 = boto3.client('s3')
    start_time = time.time()
    response = s3.get_object(Bucket=bucketname, Key=key, Range="bytes=%d-%d" % (startPos, endPos))
    response1 = s3.get_object(Bucket=bucketname, Key=key, Range="bytes=%d-%d" % (startPos, endPos))
    print(response['ContentLength'])
    print('\n')
    n = response['Body'].read()
    print(n)
    decompressed = gzip.decompress(n)
    print(decompressed)
I am getting the error below:
Traceback (most recent call last):
File "aws_s3_file.py", line 106, in <module>
read_chunks(bucketname, key, startPos, endPos)
File "aws_s3_file.py", line 73, in read_chunks
decompressed = gzip.decompress(n)
File "/opt/pym32/lib/python3.8/gzip.py", line 551, in decompress
return f.read()
File "/opt/pym32/lib/python3.8/gzip.py", line 292, in read
return self._buffer.read(size)
File "/opt/pym32/lib/python3.8/gzip.py", line 479, in read
if not self._read_gzip_header():
File "/opt/pym32/lib/python3.8/gzip.py", line 427, in _read_gzip_header
raise BadGzipFile('Not a gzipped file (%r)' % magic)
gzip.BadGzipFile: Not a gzipped file (b'\x0b\x8d')
How can I decompress a smaller portion of the data? Please note that the input file is a .gz file only. Any help will be greatly appreciated.
I need to get a smaller portion of the data from a .gz file in an S3 bucket and decompress it.
I have tried adding the header manually (sample header: b'\x1f\x8b\x08\x00#\xdc8b\x02\xff'; please ignore the mtime bytes), but I ended up with another error: "Compressed file ended before the end-of-stream marker was reached".
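For what it's worth, gzip is a stream format: the bytes at an arbitrary offset can only be decoded after everything before them in the member has been decompressed, so a mid-file Range request (with or without a hand-made header) cannot be inflated on its own. A minimal sketch of streaming from the start of the object and stopping once enough decompressed data is available, reusing the bucketname/key names from the question (the helper name and the wanted parameter are only illustrative):
import zlib
import boto3

def read_first_decompressed_bytes(bucketname, key, wanted, chunk_size=1024 * 1024):
    """Decompress a .gz object from the start until `wanted` bytes are available."""
    s3 = boto3.client('s3')
    body = s3.get_object(Bucket=bucketname, Key=key)['Body']
    # wbits=47 (32 + 15) tells zlib to auto-detect the gzip header.
    decomp = zlib.decompressobj(wbits=47)
    out = b''
    for chunk in iter(lambda: body.read(chunk_size), b''):
        out += decomp.decompress(chunk)
        if len(out) >= wanted:
            break
    return out[:wanted]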
I am using Python along with Airflow and the GCP Python library. I automated the process of sending files to GCP using Airflow DAGs. The code is as follows:
for fileid, filename in files_dictionary.items():
    if ftp.size(filename) <= int(MAX_FILE_SIZE):
        data = BytesIO()
        ftp.retrbinary('RETR ' + filename, callback=data.write)
        f = client.File(client, fid=fileid)
        size = sys.getsizeof(data.read())
        # Another option is to use FileIO but not sure how
        f.send(data, filename, size)  # This method is in another library
The code that triggers the upload is in our repo (as shown above), but the real upload is done by another dependency which is not in our control. The documentation of that method is:
def send(self, fp, filename, file_bytes):
    """Send file to cloud
    fp file object
    filename is the name of the file.
    file_bytes is the size of the file in bytes
    """
    data = self.initiate_resumable_upload(self.getFileid())
    _, blob = self.get_gcs_blob_and_bucket(data)
    # Set attachment filename. Does this work with datasets with folders
    original_filename = filename.rsplit(os.sep, 1)[-1]
    blob.content_disposition = "attachment;filename=" + original_filename
    blob.upload_from_file(fp)
    self.finish_resumable_upload(self.getFileid())
I am getting the error below:
[2020-04-23 09:43:17,239] {{models.py:1788}} ERROR - Stream must be at beginning.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1657, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.6/site-packages/airflow/operators/python_operator.py", line 103, in execute
return_value = self.execute_callable()
File "/usr/local/lib/python3.6/site-packages/airflow/operators/python_operator.py", line 108, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/usr/local/airflow/dags/transfer_data.py", line 241, in upload
f.send(data, filename, size)
File "/usr/local/lib/python3.6/site-packages/client/utils.py", line 53, in wrapper_timer
value = func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/client/client.py", line 518, in send
blob.upload_from_file(fp)
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 1158, in upload_from_file
client, file_obj, content_type, size, num_retries, predefined_acl
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 1068, in _do_upload
client, stream, content_type, size, num_retries, predefined_acl
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 1011, in _do_resumable_upload
predefined_acl=predefined_acl,
File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 960, in _initiate_resumable_upload
stream_final=False,
File "/usr/local/lib/python3.6/site-packages/google/resumable_media/requests/upload.py", line 343, in initiate
stream_final=stream_final,
File "/usr/local/lib/python3.6/site-packages/google/resumable_media/_upload.py", line 415, in _prepare_initiate_request
raise ValueError(u"Stream must be at beginning.")
ValueError: Stream must be at beginning.
The upload_from_file function has a parameter that handles the seek(0) call for you:
I would modify your upload_from_file call to:
blob.upload_from_file(file_obj=fp, rewind=True)
That should do the trick, and you don't need to include the additional seek() call.
When reading a binary file, you can navigate through it using seek operations; in other words, you can move the reference from the beginning of the file to any other position. The error ValueError: Stream must be at beginning. is basically saying: "your reference is not pointing at the beginning of the stream, and it must be".
Given that, you need to set the reference back to the beginning of the stream. You can do that with the seek function.
In your case, you would do something like:
data = BytesIO()
ftp.retrbinary('RETR ' + filename, callback=data.write)
f = client.File(client, fid=fileid)
size = sys.getsizeof(data.read())
data.seek(0)
f.send(data, filename, size)
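As a side note, sys.getsizeof returns the in-memory size of the Python object rather than the payload length, and the data.read() call is what moved the stream away from the beginning in the first place. A hedged sketch of the full loop with the size taken from the buffer and the stream rewound, using the same names as the question:
for fileid, filename in files_dictionary.items():
    if ftp.size(filename) <= int(MAX_FILE_SIZE):
        data = BytesIO()
        ftp.retrbinary('RETR ' + filename, callback=data.write)
        f = client.File(client, fid=fileid)
        size = data.getbuffer().nbytes  # payload size in bytes, without consuming the stream
        data.seek(0)                    # hand the stream over positioned at the beginning
        f.send(data, filename, size)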
I can't find a way of reading Minecraft world files that I could use in Python.
I've looked around the internet, but I can find no tutorials and only a few libraries that claim they can do this but never actually work.
from nbt import *
nbtfile = nbt.NBTFile("r.0.0.mca",'rb')
I expected this to work, but instead I got errors about the file not being compressed, or something of the sort.
Full error:
Traceback (most recent call last):
File "C:\Users\rober\Desktop\MinePy\MinecraftWorldReader.py", line 2, in <module>
nbtfile = nbt.NBTFile("r.0.0.mca",'rb')
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\site-packages\nbt\nbt.py", line 628, in __init__
self.parse_file()
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\site-packages\nbt\nbt.py", line 652, in parse_file
type = TAG_Byte(buffer=self.file)
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\site-packages\nbt\nbt.py", line 99, in __init__
self._parse_buffer(buffer)
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\site-packages\nbt\nbt.py", line 105, in _parse_buffer
self.value = self.fmt.unpack(buffer.read(self.fmt.size))[0]
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\gzip.py", line 276, in read
return self._buffer.read(size)
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\gzip.py", line 463, in read
if not self._read_gzip_header():
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\gzip.py", line 411, in _read_gzip_header
raise OSError('Not a gzipped file (%r)' % magic)
OSError: Not a gzipped file (b'\x00\x00')
Use anvil-parser (install it with pip install anvil-parser).
Reading
import anvil
region = anvil.Region.from_file('r.0.0.mca')
# You can also provide the region file name instead of the object
chunk = anvil.Chunk.from_region(region, 0, 0)
# If `section` is not provided, will get it from the y coords
# and assume it's global
block = chunk.get_block(0, 0, 0)
print(block) # <Block(minecraft:air)>
print(block.id) # air
print(block.properties) # {}
https://pypi.org/project/anvil-parser/
According to this page, the .mca file is not simply an NBT file. It begins with an 8 KiB header which includes the offsets of the chunks within the region file itself and the timestamps of the last updates of those chunks.
I recommend looking at the official announcement and this page for more information.
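If you want to see what that header looks like without a library, here is a minimal sketch that reads the region header directly and inflates one chunk's raw NBT payload (assuming the standard region layout: 1024 four-byte offset entries, 1024 four-byte timestamps, then length-prefixed, usually zlib-compressed, chunk data):
import gzip
import struct
import zlib

def read_chunk_data(path, cx, cz):
    """Return the decompressed NBT payload of chunk (cx, cz) from a region (.mca) file."""
    with open(path, 'rb') as f:
        # Each header entry is 3 bytes of sector offset plus 1 byte of sector count.
        f.seek(4 * ((cx & 31) + (cz & 31) * 32))
        sector_offset = int.from_bytes(f.read(3), 'big')
        if sector_offset == 0:
            return None  # this chunk has not been generated yet
        # Chunk data starts at sector_offset * 4096: 4-byte big-endian length, then 1 compression byte.
        f.seek(sector_offset * 4096)
        length, compression = struct.unpack('>IB', f.read(5))
        payload = f.read(length - 1)
        # Compression type 2 is zlib (the usual case); type 1 is gzip.
        return zlib.decompress(payload) if compression == 2 else gzip.decompress(payload)

# raw_nbt = read_chunk_data('r.0.0.mca', 0, 0)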
I am consuming a web service (written in Java) that basically returns a byte[] array (the SOAP equivalent is base64-encoded binary data).
I am using the Python suds library, and the following code works for me on my Mac (and on Cygwin under Windows), but the decoding does not work on vanilla Windows (Python 2.6.5). I am primarily a Java developer, so any help will be much appreciated.
from suds.client import Client
import base64,os,shutil,tarfile,StringIO
u = "user"
p = "password"
url = "https://xxxx/?wsdl"
client = Client(url, username=u, password=p)
bin = client.service.getTargz("test")
f = open("tools.tar.gz", "w")
f.write(base64.b64decode(bin.encode('ASCII')))
f.close()
print "finished writing"
tarfile.open("tools.tar.gz").extractall()
It works great on a Mac, but on Windows it gives me this error:
C:\client>python client.py
xml
Getting the sysprep file from the webservice
finished writing
Traceback (most recent call last):
File "client.py", line 28, in
tarfile.open("tools.tar.gz").extractall()
File "C:\Python26\lib\tarfile.py", line 1653, in open
return func(name, "r", fileobj, **kwargs)
File "C:\Python26\lib\tarfile.py", line 1720, in gzopen
**kwargs)
File "C:\Python26\lib\tarfile.py", line 1698, in taropen
return cls(name, mode, fileobj, **kwargs)
File "C:\Python26\lib\tarfile.py", line 1571, in __init__
self.firstmember = self.next()
File "C:\Python26\lib\tarfile.py", line 2317, in next
tarinfo = self.tarinfo.fromtarfile(self)
File "C:\Python26\lib\tarfile.py", line 1235, in fromtarfile
buf = tarfile.fileobj.read(BLOCKSIZE)
File "C:\Python26\lib\gzip.py", line 219, in read
self._read(readsize)
File "C:\Python26\lib\gzip.py", line 271, in _read
uncompress = self.decompress.decompress(buf)
zlib.error: Error -3 while decompressing: invalid distance too far back
Try
f = open("tools.tar.gz", "wb")
It's crucial to tell Python that this is a binary file (in Python 3 it also becomes crucial on Unix-like systems, but in Python 2 it's not strictly needed there, which is why your code works on Mac OS X): the default is text mode, which on Windows translates each \n written into \r\n on disk.
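Applied to the snippet above, a minimal sketch of the corrected write path (same client, imports, and service call as in the question):
bin = client.service.getTargz("test")
# "wb" keeps the gzip bytes intact on Windows; text mode would rewrite 0x0A bytes as 0x0D 0x0A.
f = open("tools.tar.gz", "wb")
f.write(base64.b64decode(bin.encode('ASCII')))
f.close()
tarfile.open("tools.tar.gz").extractall()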