How to compress a processed text file in Python?

I have a text file that I constantly append data to. When processing is done I need to gzip the file. I tried several options, such as shutil.make_archive, tarfile, and gzip, but could not get any of them to work. Is there no simple way to compress a file without actually writing to it?
Let's say I have mydata.txt file and I want it to be gzipped and saved as mydata.txt.gz.

I don't see the problem. You should be able to use e.g. the gzip module just fine, something like this:
import gzip

# Read the original file and write its contents through a gzip stream.
with open("mydata.txt", "rb") as inf, gzip.open("file.txt.gz", "wb") as outf:
    outf.write(inf.read())
There's no problem with the file being overwritten, the name given to gzip.open() is completely independent of the name given to plain open().

If you want to compress a file without writing to it from Python, you could run a shell command such as gzip using the subprocess module, os.popen, or os.system.
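For example, here is a minimal sketch using subprocess; the -k flag asks gzip to keep the original file, and the filename is taken from the question:

import subprocess

# Run the external gzip tool; -k keeps mydata.txt and writes mydata.txt.gz.
# check=True raises CalledProcessError if gzip exits with a nonzero status.
subprocess.run(["gzip", "-k", "mydata.txt"], check=True)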

Related

How to *properly* compress and decompress a text file using bz2 and python

For a while now I've had a system that scrapes and compresses files using bz2 compression. It does so with the following block of code, which I found on SO a few months back:
Let's assume for the purposes of this post the filename is always file.XXXX where XXXX is the relevant extension. We start with .txt
### How to compress a text file
import bz2

filepath_compressed = "file.tar.bz2"
with open("file.txt", 'rb') as data:
    tarbz2contents = bz2.compress(data.read(), 9)
with bz2.BZ2File(filepath_compressed, 'wb') as f_comp:
    f_comp.write(tarbz2contents)
Now, to decompress it, I've always got it to work using a decompression app I have called Keka, which decompresses the .tar.bz2 file to .tar; then I run it through Keka again to get an "extensionless" file, add a .txt to it on my Mac, and then it works.
Now, to decompress programmatically, I've tried a few things. I've tried the stuff from this post and the code from this post. I've tried using BZ2Decompressor and BZ2File and everything. I just seem to be missing something, and I'm not sure what it is.
Here is what I have so far, and I'd like to know what is wrong with this code:
import bz2, tarfile, shutil

# Decompress to tar
with bz2.BZ2File("file.tar.bz2") as fr, open("file.tar", "wb") as fw:
    shutil.copyfileobj(fr, fw)

# Decompress from tar to txt
with tarfile.open("file.tar", "r:") as tar:
    tar.extractall("file_out.txt")
This code crashes with a "tarfile.ReadError: truncated header" error. I think the first context manager outputs a binary text file, and I tried decoding that, but that failed too. What am I missing here? I feel like a noob.
If you would like a minimum runnable piece of code to replicate this, add the following to make a dummy file:
lines = ["Line 1", "Line 2", "Line 3"]
with open("file.txt", "w") as f:
    for line in lines:
        f.write(line + "\n")
The thing that you're making is not a .tar.bz2 file, but rather a .bz2.bz2 file. You are compressing twice with bzip2 (the second time with no effect), and there is no tar file generation anywhere to be seen.
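To illustrate, here is a minimal sketch of both single-pass routes, using the filenames from the question; either compress once with BZ2File (which compresses as you write), or let tarfile handle the tar and bz2 layers together:

import bz2
import shutil
import tarfile

# Route 1: a plain .bz2 (no tar layer). BZ2File compresses on write,
# so feed it the raw bytes exactly once.
with open("file.txt", "rb") as src, bz2.BZ2File("file.txt.bz2", "wb") as dst:
    shutil.copyfileobj(src, dst)
with bz2.BZ2File("file.txt.bz2", "rb") as src, open("file_out.txt", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Route 2: a real .tar.bz2. tarfile builds the archive and compresses it in one go.
with tarfile.open("file.tar.bz2", "w:bz2") as tar:
    tar.add("file.txt")
with tarfile.open("file.tar.bz2", "r:bz2") as tar:
    tar.extractall("file_out")  # note: extractall takes a directory, not a filename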

Error when using gzip on a file containing line breaks

I'm attempting to use Python's gzip library to streamline some Python scripts that create CSV output files. I've tried a number of different methods of creating the gzip file, but no matter which method I've tried, I'm running into the same issue.
My python script runs successfully, but when I try to decompress the gzip file in Finder (using MacOS 10.15.6), I'm prompted with the following error:
Unable to expand "file.csv.gz" into "Documents". (Error 79 - Inappropriate file type or format.)
After some debugging, I've narrowed down the cause of the error to the file content containing line break (\n) characters.
This simple example code triggers the above error on gzip expansion:
import gzip
content = b'Id,Food\n1,Spam\n2,Eggs\n'
f = gzip.open('file.csv.gz', 'wb')
f.write(content)
f.close()
When I remove all \n characters from the content variable, everything works fine:
import gzip
content = b'Id,Food,1,Spam,2,Eggs'
f = gzip.open('file.csv.gz', 'wb')
f.write(content)
f.close()
Does gzip want me to use a different line break mechanism? I'm sure I'm missing some sort of foundational knowledge about gzip or binaries, so any info that helps get me back on track would be much appreciated.
It has nothing to do with Python's gzip. It is, arguably, a bug in macOS: the Archive Utility sometimes misdetects the uncompressed data as an mtree, but then finds that the data violates the mtree format.
The solution is to not double-click to decompress. Use gzip to decompress.
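For instance, you can decompress from the Terminal with gzip -d file.csv.gz, or read the file back directly in Python, which bypasses Archive Utility entirely:

import gzip

# Open in text mode ('rt') so gzip decodes the decompressed bytes to str.
with gzip.open('file.csv.gz', 'rt') as f:
    print(f.read())  # Id,Food / 1,Spam / 2,Eggs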

Python Subprocess for Notepad

I am trying to open Notepad using Popen and write something into it. I can't get my head around it. I can open Notepad using the command:
notepadprocess=subprocess.Popen('notepad.exe')
I am trying to identify how can I write anything in the text file using python. Any help is appreciated.
You can first write something into a text file (e.g. foo.txt) and then open it with Notepad:
import os

with open('foo.txt', 'w') as f:
    f.write('Hello world!')
os.system("notepad.exe foo.txt")
You may be confusing the concept of (text) file with the processes that manipulate them.
Notepad is a program, of which you can create a process. A file, on the other hand, is just a structure on your hard drive.
From a programming standpoint, Notepad doesn't edit files. It:
reads a file into computer memory
modifies the content of that memory
writes that memory back into a file (which could have the same name or a different one; the latter is the "Save As" operation).
Your program, just as any other program, can manipulate files, just as notepad does. In particular, you can perform exactly the same sequence as Notepad:
my_file= "myfile.txt" #the name/path of the file
with open(file, "rb") as f: #open the file for reading
content= f.read() #read the file into memory
content+= "mytext" #change the memory
with open(file, "wb") as f: #open the file for writing
f.write( content ) #write the memory into the file
Found the exact solution from Alex K's comment. I used pywinauto to perform this task.
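For reference, a minimal sketch of the pywinauto approach might look like this; the UntitledNotepad window name and Edit control are assumptions based on classic Notepad and may need adjusting on your system:

from pywinauto.application import Application

# Start Notepad and type into its edit control.
# 'UntitledNotepad' and 'Edit' are assumed lookup names for classic Notepad.
app = Application().start("notepad.exe")
app.UntitledNotepad.Edit.type_keys("Hello world!", with_spaces=True)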

Opening And Reading Large Numbers of Files in Python

I have 37 data files that I need to open and analyze using python. Rather than brute force my code with a lot of open() and close() statements, is there a concise way to open and read from a large number of files?
You are going to have to open and close a file handle for each file you are hoping to read from. What is your aversion to doing it this way?
Are you perhaps looking for a good way to determine which files need to be read?
Use a dictionary mapping filenames to file handles and then iterate over the items, as in the sketch below. Or a list of tuples. Or two-dimensional arrays. And so on.
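A minimal sketch of the dictionary approach, with hypothetical filenames; contextlib.ExitStack closes every handle when the block ends:

from contextlib import ExitStack

filenames = ["data1.dat", "data2.dat", "data3.dat"]  # hypothetical names
with ExitStack() as stack:
    # Map each filename to an open handle; ExitStack closes them all on exit.
    handles = {name: stack.enter_context(open(name)) for name in filenames}
    for name, fh in handles.items():
        for line in fh:
            pass  # process each line here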
Use the standard library fileinput module
Pass in the data files on the command line and process like this
import fileinput

for line in fileinput.input():
    process(line)
This iterates over all the lines of all the files passed in on the command line. This module also provides helper functions to let you know which file and line you are on currently.
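For example, a minimal sketch using those helpers:

import fileinput

# fileinput.filename() and fileinput.filelineno() report the current
# file and the line number within it as you iterate.
for line in fileinput.input():
    print(fileinput.filename(), fileinput.filelineno(), line, end="")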
Use the arcane functionality known as a function.
def slurp(filename):
    """slurp will cleanly read in a file's contents, cleaning up after itself"""
    # Using the 'with' statement will automagically close
    # the file handle when you're done.
    with open(filename, "r") as fh:
        # if the files are too big to keep in-memory, then read by chunks
        # instead and process the data into smaller data structures as needed.
        return fh.read()

data = [slurp(filename) for filename in ["data1.dat", "data2.dat", "data3.dat"]]
You can also combine the entire thing:
for filename in ["a.dat", "b.dat", "c.dat"]:
    with open(filename, "r") as fh:
        for line in fh:
            process_line(line)
And so on...

simple python file writing question

I'm learning Python, and have run into a bit of a problem. On my OSX install of Python 3.1, this happens in the console:
>>> filename = "test"
>>> reader = open(filename, 'r')
>>> writer = open(filename, 'w')
>>> reader.read()
''
>>> writer.write("hello world\n")
12
>>> reader.read()
''
And running more test in Bash confirms that there is nothing in test. What's going on?
Thanks.
There are two potential reasons why you are seeing this behaviour.
When you open a file for writing (with the "w" open mode in Python), the OS removes the original file and creates a totally new one. So by opening the file for reading first and then writing, the original reading handle refers to a file that no longer has a name (the file still exists until you close it). At that point you're reading from a different file than you're writing to.
If you swap the order of opening, so that you open for writing and then for reading, you won't necessarily be able to read the data from the file until you flush it:
>>> writer.flush()
>>> reader.read()
'hello world\n'
Flushing the file writes any data that might be in Python's file buffers to the OS, so that when you read from the file from the other handle, the OS will return the data. Note that Python itself doesn't know these two handles refer to the same file, but the OS does.
You're probably trashing your file. It's not usually a good idea to open a file for reading and writing at the same time.
Buffering. If you really want to read and write to the same file, open one handle using "w+".
And with the buffering, you will need to force the buffer to be emptied before reading. Closing the file is a good way to do this.
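A minimal sketch of the single-handle approach, using the filename from the question; flush pushes Python's buffer to the OS, and seek rewinds before reading:

with open("test", "w+") as f:
    f.write("hello world\n")
    f.flush()        # empty Python's buffer so the data reaches the OS
    f.seek(0)        # rewind to the start of the file before reading
    print(f.read())  # prints 'hello world'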
