I am trying to decompress some MMS messages that are sent to me zipped. The problem is that sometimes it works and sometimes it doesn't, and when it doesn't work, the Python zipfile module complains and says that it is a bad zip file. But the archive decompresses fine using the Unix unzip command.
This is what I've got:
zippedfile = open('%stemp/tempfile.zip' % settings.MEDIA_ROOT, 'w+')
zippedfile.write(string)
z = zipfile.ZipFile(zippedfile)
I am using 'w+' and writing a string to it; the string contains the base64-decoded contents of a zip file.
Then I do this:
filelist = z.infolist()
images = []
for f in filelist:
    raw_mimetype = mimetypes.guess_type(f.filename)[0]
    if raw_mimetype:
        mimetype = raw_mimetype.split('/')[0]
    else:
        mimetype = 'unknown'
    if mimetype == 'image':
        images.append(f.filename)
This way I've got a list of all the images in the zip file. But this doesn't always work, since the zipfile module complains about some of the files.
Is there a way to do this without using the zipfile module?
Could I somehow use the Unix command unzip instead of zipfile and then do the same thing to retrieve all the images from the archive?
You should very probably open the file in binary mode when writing zipped data into it. That is, you should use
zippedfile = open('%stemp/tempfile.zip' % settings.MEDIA_ROOT, 'wb+')
You might have to close and reopen the file, or maybe seek to the start of the file after writing it.
filename = '%stemp/tempfile.zip' % settings.MEDIA_ROOT
zippedfile = open(filename, 'wb+')
zippedfile.write(string)
zippedfile.close()
z = zipfile.ZipFile(filename,"r")
You say the string is base64 decoded, but you haven't shown any code that decodes it - are you sure it's not still encoded?
data = string.decode('base64')
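Putting the two points together, here is a minimal sketch of the whole flow, assuming string still holds the base64 text at that point (the zipdata name is just illustrative); using io.BytesIO also avoids the temporary file and its mode issues entirely:
import io
import zipfile

# decode the base64 payload first (raw bytes, not text)
zipdata = string.decode('base64')  # or base64.b64decode(string)

# open the archive straight from memory instead of a temp file
z = zipfile.ZipFile(io.BytesIO(zipdata))
print(z.namelist())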
I'm looking to edit a Minecraft Windows 10 level.dat file in Python. I've tried using the packages nbt and pyanvil, but I get the error OSError: Not a gzipped file. If I print open("level.dat", "rb").read() I get a lot of nonsensical data. It seems like it needs to be decoded somehow, but I don't know what decoding it needs. How can I open (and ideally edit) one of these files?
To read the data, just do:
from nbt import nbt
nbtfile = nbt.NBTFile("level.dat", 'rb')
print(nbtfile) # Here you should get a TAG_Compound('Data')
print(nbtfile["Data"].tag_info()) # Data came from the line above
for tag in nbtfile["Data"].tags: # This loop will show us each entry
print(tag.tag_info())
As for editing :
# Writing data (changing the difficulty value)
nbtfile["Data"]["Difficulty"].value = 2
print(nbtfile["Data"]["Difficulty"].tag_info())
nbtfile.write_file("level.dat")
EDIT:
It looks like Mojang doesn't use the same formatting for Java and Bedrock, as Bedrock's level.dat file is stored in little-endian format and uses non-compressed UTF-8.
As an alternative, Amulet-Nbt is supposed to be a Python library written in Cython for reading and editing NBT files (supposedly works with Bedrock too).
Nbtlib also seems to work, as long as you set byteorder="little" when loading the file.
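For example, a minimal sketch with nbtlib might look like the following; the gzipped=False flag is an assumption (Bedrock's level.dat is normally not gzip-compressed), and the extra header Bedrock puts at the start of the file may still need to be stripped separately:
import nbtlib

# Assumption: uncompressed, little-endian NBT payload
level = nbtlib.load("level.dat", gzipped=False, byteorder="little")
print(level)  # inspect the parsed tags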
Let me know if you need more help.
You'll have to give the path either relative to the current working directory:
path/to/file.dat
Or you can use the absolute path to the file:
C:/user/dir/path/to/file.dat
Read the data, replace the values, and then write it back:
# Read in the file
with open('file.dat', 'r') as file:
    filedata = file.read()

# Replace the target string
filedata = filedata.replace('old value', 'new value')

# Write the file out again
with open('file.dat', 'w') as file:
    file.write(filedata)
So I've had this system that scrapes and compresses files for a while now using bz2 compression. The way it does so is using the following block of code I found on SO a few months back:
Let's assume for the purposes of this post the filename is always file.XXXX where XXXX is the relevant extension. We start with .txt
### How to compress a text file
filepath_compressed = "file.tar.bz2"
with open("file.txt", 'rb') as data:
tarbz2contents = bz2.compress(data.read(), 9)
with bz2.BZ2File(filepath_compressed, 'wb') as f_comp:
f_comp.write(tarbz2contents)
Now, to decompress it, I've always got it to work using a decompression app I have called Keka, which decompresses the .tar.bz2 file to .tar; then I run it through Keka again to get an "extensionless" file, which I add a .txt to on my Mac, and then it works.
Now, to decompress programmatically, I've tried a few things. I've tried the stuff from this post and the code from this post. I've tried using BZ2Decompressor and BZ2File and everything. I just seem to be missing something and I'm not sure what it is.
Here is what I have so far, and I'd like to know what is wrong with this code:
import bz2, tarfile, shutil
# Decompress to tar
with bz2.BZ2File("file.tar.bz2") as fr, open("file.tar", "wb") as fw:
shutil.copyfileobj(fr, fw)
# Decompress from tar to txt
with tarfile.open("file.tar", "r:") as tar:
tar.extractall("file_out.txt")
This code crashes because of a "tarfile.ReadError: truncated header" problem. I think the first context manager outputs a binary text file, and I tried decoding that, but that failed too. What am I missing here? I feel like a noob.
If you would like a minimum runnable piece of code to replicate this, add the following to make a dummy file:
lines = ["Line 1","Line 2", "Line 3"]
with open("file.txt", "w") as f:
for line in lines:
f.write(line+"\n")
The thing that you're making is not a .tar.bz2 file, but rather a .bz2.bz2 file. You are compressing twice with bzip2 (the second time with no effect), and there is no tar file generation anywhere to be seen.
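To illustrate both points, here is a minimal sketch, not the original poster's code, using the same filenames as the question; option 1 just undoes the accidental double bzip2 compression, option 2 builds a real .tar.bz2 with tarfile instead:
import bz2
import tarfile

# Option 1: keep the existing writer and undo the double compression
with open("file.tar.bz2", "rb") as f:
    data = bz2.decompress(bz2.decompress(f.read()))  # two rounds, matching the two compress steps
with open("file_out.txt", "wb") as f:
    f.write(data)

# Option 2: create a genuine .tar.bz2 in the first place, then extract it normally
with tarfile.open("file.tar.bz2", "w:bz2") as tar:
    tar.add("file.txt")
with tarfile.open("file.tar.bz2", "r:bz2") as tar:
    tar.extractall("out_dir")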
I am trying to write something that goes through every file in a directory, but it crashes every time it meets a file that has an umlaut in the name, like ä.txt.
The shortened code:
import codecs
import os
for filename in os.listdir(WATCH_DIRECTORY):
    with codecs.open(filename, 'rb', 'utf-8') as rawdata:
        data = rawdata.readline()
        # ...
And then I get this:
IOError: [Errno 2] No such file or directory: '\xc3\xa4.txt'
I've tried to encode/decode the filename variable with .encode('utf-8'), .decode('utf-8') and both combined. This usually leads to "ascii cannot decode blah blah"
I also tried unicode(filename) with and without encode/decode.
Soooo, kinda stuck here :)
You are opening a relative path; you need to make it absolute.
This has nothing really to do with encodings; both Unicode strings and byte strings will work, especially when sourced from os.listdir().
However, os.listdir() produces just the base filename, not a path, so add that back in:
for filename in os.listdir(WATCH_DIRECTORY):
    fullpath = os.path.join(WATCH_DIRECTORY, filename)
    with codecs.open(fullpath, 'rb', 'utf-8') as rawdata:
By the way, I recommend you use the io.open() function rather than codecs.open(). The io module is the new Python 3 I/O framework, and is a lot more robust than the older codecs module.
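For example, a minimal sketch of the same loop with io.open (same behaviour as the codecs version, just the newer API):
import io
import os

for filename in os.listdir(WATCH_DIRECTORY):
    fullpath = os.path.join(WATCH_DIRECTORY, filename)
    # text mode with an explicit encoding replaces codecs.open(..., 'rb', 'utf-8')
    with io.open(fullpath, 'r', encoding='utf-8') as rawdata:
        data = rawdata.readline()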
I'm having problems getting images to convert out of bytes/strings/etc. I can turn an image into a string, or a byte array, or use b64encode on it, but when I try to decode/revert it back to an image, it never works. I've tried a lot of things, locally converting an image and then reconverting it, saving it under a different name. However, the resulting files never actually show anything (black on Linux, "can't display image" on Windows).
My most basic b64encoding script is as follows:
import base64
def convert(image):
    f = open(image)
    data = f.read()
    f.close()
    string = base64.b64encode(data)
    convertit = base64.b64decode(string)
    t = open("Puppy2.jpg", "w+")
    t.write(convertit)
    t.close()

if __name__ == "__main__":
    convert("Puppy.jpg")
I've been stuck on this for a while. I'm sure it's a simple solution, but being new to Python, it's been a bit difficult trying to sort things out.
If it helps with any insight, the end goal here is to transfer images over a network, possibly MQTT.
Any help is much appreciated. Thanks!
Edit** This is in Python 2.7.
Edit 2** Wow, you guys move fast. What a great intro to the community - thanks a lot for the quick responses and super fast results!
For Python 3, you need to open and write in binary mode:
def convert(image):
    f = open(image, "rb")
    data = f.read()
    f.close()
    string = base64.b64encode(data)
    convert = base64.b64decode(string)
    t = open("Puppy2.jpg", "wb")
    t.write(convert)
    t.close()
Using Python 2 on Linux, simply 'r' and 'w' should work fine. On Windows you need to do the same as above.
From the docs:
On Windows, 'b' appended to the mode opens the file in binary mode, so there are also modes like 'rb', 'wb', and 'r+b'. Python on Windows makes a distinction between text and binary files; the end-of-line characters in text files are automatically altered slightly when data is read or written. This behind-the-scenes modification to file data is fine for ASCII text files, but it’ll corrupt binary data like that in JPEG or EXE files. Be very careful to use binary mode when reading and writing such files. On Unix, it doesn’t hurt to append a 'b' to the mode, so you can use it platform-independently for all binary files.
You can also write your code a little more succinctly by using with to open your files which will automatically close them for you:
from base64 import b64encode, b64decode
def convert(image):
    with open(image, "rb") as f, open("Puppy2.jpg", "wb") as t:
        conv = b64decode(b64encode(f.read()))
        t.write(conv)
import base64

def convert(image):
    f = open(image, "rb")
    data = f.read()
    f.close()
    return data

if __name__ == "__main__":
    data = convert("Puppy.jpg")
    string = base64.b64encode(data)
    converted = base64.b64decode(string)
    t = open("Puppy2.jpg", "wb")
    t.write(converted)
    t.close()
I'm trying to use the Python GZIP module to simply uncompress several .gz files in a directory. Note that I do not want to read the files, only uncompress them. After searching this site for a while, I have this code segment, but it does not work:
import gzip
import glob
import os
import shutil

for file in glob.glob(PATH_TO_FILE + "/*.gz"):
    #print file
    if os.path.isdir(file) == False:
        shutil.copy(file, FILE_DIR)
        # uncompress the file
        inF = gzip.open(file, 'rb')
        s = inF.read()
        inF.close()
The .gz files are in the correct location, and I can print the full path + filename with the print command, but the gzip module isn't getting executed properly. What am I missing?
If you get no error, the gzip module probably is being executed properly, and the file is already getting decompressed.
The precise definition of "decompressed" varies on context:
I do not want to read the files, only uncompress them
The gzip module doesn't work as a desktop archiving program like 7-zip - you can't "uncompress" a file without "reading" it. Note that "reading" (in programming) usually just means "storing (temporarily) in the computer RAM", not "opening the file in the GUI".
What you probably mean by "uncompress" (as in a desktop archiving program) is more precisely described (in programming) as "read an in-memory stream/buffer from a compressed file, and write it to a new file (and possibly delete the compressed file afterwards)".
inF = gzip.open(file, 'rb')
s = inF.read()
inF.close()
With these lines, you're just reading the stream. If you expect a new "uncompressed" file to be created, you just need to write the buffer to a new file:
with open(out_filename, 'wb') as out_file:
    out_file.write(s)
If you're dealing with very large files (larger than the amount of your RAM), you'll need to adopt a different approach. But that is the topic for another question.
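For reference, a minimal sketch of that streaming approach (the filenames are placeholders); shutil.copyfileobj copies in fixed-size chunks, so the whole file never has to fit in RAM:
import gzip
import shutil

with gzip.open(in_filename, 'rb') as in_file, open(out_filename, 'wb') as out_file:
    # stream the decompressed data chunk by chunk
    shutil.copyfileobj(in_file, out_file)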
You're decompressing the file into the s variable and doing nothing with it. You should stop searching Stack Overflow and read at least the Python tutorial. Seriously.
Anyway, there are several things wrong with your code:
you need to STORE the unzipped data in s into some file.
there's no need to copy the actual *.gz files, because in your code you're unpacking the original gzip file and not the copy.
you're using file, which shadows a built-in name, as a variable. This is not an error, just a very bad practice.
This should probably do what you wanted:
import gzip
import glob
import os
import os.path
for gzip_path in glob.glob(PATH_TO_FILE + "/*.gz"):
    if os.path.isdir(gzip_path) == False:
        inF = gzip.open(gzip_path, 'rb')
        # uncompress the gzip_path INTO THE 's' variable
        s = inF.read()
        inF.close()

        # get gzip filename (without directories)
        gzip_fname = os.path.basename(gzip_path)
        # get original filename (remove 3 characters from the end: ".gz")
        fname = gzip_fname[:-3]
        uncompressed_path = os.path.join(FILE_DIR, fname)

        # store uncompressed file data from 's' variable
        open(uncompressed_path, 'wb').write(s)
You should use with to open files and, of course, store the result of reading the compressed file. See gzip documentation:
import gzip
import glob
import os
import os.path
for gzip_path in glob.glob("%s/*.gz" % PATH_TO_FILE):
if not os.path.isdir(gzip_path):
with gzip.open(gzip_path, 'rb') as in_file:
s = in_file.read()
# Now store the uncompressed data
path_to_store = gzip_fname[:-3] # remove the '.gz' from the filename
# store uncompressed file data from 's' variable
with open(path_to_store, 'w') as f:
f.write(s)
Depending on what exactly you want to do, you might want to have a look at tarfile and its 'r:gz' option for opening files.
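A minimal sketch of that, assuming the .gz files are actually gzipped tar archives (a plain single-file .gz would not open this way):
import glob
import os
import tarfile

for tar_path in glob.glob(os.path.join(PATH_TO_FILE, "*.tar.gz")):
    with tarfile.open(tar_path, "r:gz") as tar:
        tar.extractall(FILE_DIR)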
I was able to resolve this issue by using the subprocess module:
import subprocess

for file in glob.glob(PATH_TO_FILE + "/*.gz"):
    if os.path.isdir(file) == False:
        shutil.copy(file, FILE_DIR)
        # uncompress the file
        subprocess.call(["gunzip", FILE_DIR + "/" + os.path.basename(file)])
Since my goal was to simply uncompress the archive, the above code accomplishes this. The archived files are located in a central location, are copied to a working area, uncompressed, and used in a test case. The gzip module was too complicated for what I was trying to accomplish.
Thanks for everyone's help. It is much appreciated!
I think there is a much simpler solution than the others presented, given the OP only wanted to extract all the files in a directory:
import glob
from setuptools import archive_util
for fn in glob.glob('*.gz'):
    archive_util.unpack_archive(fn, '.')