Assembled MPEG file unable to play in media player - Python

I'm currently working on a school project and I seem to have encountered some problems with an MPEG file. The scope of my project is to:
1) split an MPEG file into many fixed-size chunks.
2) assemble some of them while omitting certain chunks.
Problem 1:
When I play the assembled file in a media player, it plays the video until it reaches the chunk that I omitted.
Example:
chunk = ["yui_1", "yui_2", "yui_3", "yui_5", "yui_6"]
Duration of each chunk: 1 second
*Notice that I have omitted the "yui_4" chunk.*
If I assemble all the chunks except "yui_4", the video plays the first 2 seconds before it hangs for the rest of its duration.
Problem 2:
When I assemble the chunks while omitting the first chunk, the entire resulting MPEG file is unplayable.
Example:
chunk = ["yui_2", "yui_3", "yui_4", "yui_5", "yui_6"]
Duration of each chunk: 1 second
Below is a portion of my code (hard-coded):
def splitFile(inputFile, chunkSize):
    name, extension = inputFile.split(".", 1)
    os.chdir("./" + media_dir)
    # read the entire contents of the file
    f = open(inputFile, 'rb')
    data = f.read()
    f.close()
    # length of data, i.e. size of the input file in bytes
    bytes = len(data)
    # calculate the number of chunks to be created
    noOfChunks = bytes // chunkSize
    if bytes % chunkSize:
        noOfChunks += 1
    # create an info.txt file for the metadata
    f = open('info.txt', 'w')
    f.write(inputFile + ',chunk,' + str(noOfChunks) + ',' + str(chunkSize))
    f.close()
    chunkNames = []
    count = 1
    for i in range(0, bytes, chunkSize):
        fn1 = name + "_%s" % count
        chunkNames.append(fn1)
        f = open(fn1, 'wb')
        f.write(data[i:i + chunkSize])
        f.close()
        count += 1
Below is a portion of how I assemble the chunks:
def assemble():
    datafile = ["yui_1", "yui_2", "yui_3", "yui_4", "yui_5", "yui_6", "yui_7"]
    output = open("output.mpeg", "wb")
    for item in datafile:
        data = open(item, "rb").read()
        output.write(data)
    output.close()

MPEG video files contain encoded video data, which means the data is compressed. The bottom line is that cutting the video into chunks that can be appended or played separately is not trivial. Neither of your problems will be solved unless you read the MPEG-2 transport stream specification carefully and understand where to find points at which you can cut and "splice" while still producing a compliant MPEG stream. My guess is that this isn't what you want to do for a school project.
Maybe you should instead read up on how to use FFmpeg (http://www.ffmpeg.org/) to cut and append video files.
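To make that concrete, here is a minimal sketch of driving ffmpeg from Python, assuming ffmpeg is installed; the file names `yui.mpeg`, `keep.txt`, and `out_%03d.ts` are placeholders. ffmpeg's segment muxer cuts a stream into fixed-duration pieces at keyframe boundaries without re-encoding, and the concat demuxer reassembles any subset listed in a text file:

```python
import subprocess

def build_split_cmd(input_file, seconds=1):
    # Segment muxer: cut into fixed-duration MPEG-TS pieces without re-encoding.
    # TS segments can later be concatenated far more safely than raw
    # byte-range chunks of the original file.
    return ["ffmpeg", "-i", input_file, "-c", "copy",
            "-f", "segment", "-segment_time", str(seconds), "out_%03d.ts"]

def build_concat_cmd(list_file, output_file):
    # Concat demuxer: list_file contains one line per kept segment,
    # e.g.  file 'out_000.ts'
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", output_file]

split_cmd = build_split_cmd("yui.mpeg")
concat_cmd = build_concat_cmd("keep.txt", "output.mpeg")
# subprocess.run(split_cmd, check=True)   # run only if ffmpeg is installed
# subprocess.run(concat_cmd, check=True)
```

Because each TS segment starts at a decodable point, omitting one segment from `keep.txt` skips that second of video instead of hanging the player.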
Good luck on your project.

Related

How to read chunk from middle of a long csv file using Python (200 GB+)

I have a large CSV file and I am reading it in chunks. In the middle of the process the memory filled up, so I want to restart from where it left off. I know which chunk it was, but I don't know how to go to that chunk directly.
This is what I tried.
# data is the txt file
reader = pd.read_csv(data,
                     delimiter="\t",
                     chunksize=1000)

# Please see the code below. When my last process broke, i was 154, so I think it should
# start from the 154000th line. This time I don't plan to read the whole file
# at once, so I have an end point at 160000.
first = 154 * 1000
last = 160 * 1000

output_path = 'usa_hotspot_data_' + str(first) + '_' + str(last) + '.csv'
print("Output file: ", output_path)
try:
    os.remove(output_path)
except OSError:
    pass

# Read chunks and save to a new csv
for i, chunk in enumerate(reader):
    if i >= first and i <= last:
        # <-- here I do something -->
        pass
    # Progress bar to keep track
    if i % 1000 == 0:
        print("#", end='')
However, this takes a lot of time just to reach the chunk I want. How can I skip reading the chunks before it and go there directly?
From the pandas.read_csv documentation:
skiprows: Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file.
You can pass skiprows to read_csv; it will act like an offset.
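A small sketch of that idea; the in-memory TSV, column names, and offsets below are made-up stand-ins for the real 200 GB file. Passing a range to skiprows skips the already-processed data lines while keeping line 0, the header:

```python
import io
import pandas as pd

# Made-up stand-in for the huge tab-separated file: 20 data rows.
csv_text = "id\tvalue\n" + "".join("%d\t%d\n" % (i, i * 10) for i in range(20))

first_line = 10  # first data row we still need (0-indexed)
chunk_size = 4

reader = pd.read_csv(
    io.StringIO(csv_text),
    delimiter="\t",
    chunksize=chunk_size,
    # Skip file lines 1..first_line (data rows 0..first_line-1),
    # keeping line 0, the header.
    skiprows=range(1, first_line + 1),
)

rows = pd.concat(reader)
print(rows.iloc[0]["id"])  # 10: reading resumes right where we want
```

Because the skipped lines are never parsed into DataFrames, this is much cheaper than iterating over chunks you then discard.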

Speed up reading in a compressed bz2 file ('rb' mode)

I have a BZ2 file of more than 10GB. I'd like to read it without decompressing it into a temporary file (it would be more than 50GB).
With this method:
import bz2, time
t0 = time.time()
time.sleep(0.001)  # to avoid division by 0
with bz2.open(r"F:\test.bz2", 'rb') as f:
    for i, l in enumerate(f):
        if i % 100000 == 0:
            print('%i lines/sec' % (i / (time.time() - t0)))
I can only read ~ 250k lines per second. On a similar file, first decompressed, I get ~ 3M lines per second, i.e. a x10 factor:
with open(r"F:\test.txt", 'rb') as f:
I think it's not only due to the intrinsic decompression CPU time (because the total time of decompressing into a temp file plus reading the uncompressed file is much smaller than the method described here), but perhaps to a lack of buffering, or other reasons. Are there faster Python implementations of bz2.open?
How can I speed up the reading of a BZ2 file in binary mode, looping over "lines" (separated by \n)?
Note: currently, the time to decompress test.bz2 into test.tmp plus the time to iterate over the lines of test.tmp is far smaller than the time to iterate over the lines of bz2.open('test.bz2'), and this probably should not be the case.
Linked topic: https://discuss.python.org/t/non-optimal-bz2-reading-speed/6869
You can use BZ2Decompressor to deal with huge files. It decompresses blocks of data incrementally, just out of the box:
t0 = time.time()
time.sleep(0.000001)
with open('temp.bz2', 'rb') as fi:
    decomp = bz2.BZ2Decompressor()
    residue = b''
    total_lines = 0
    for data in iter(lambda: fi.read(100 * 1024), b''):
        # concatenate the residue of the previous block to the beginning
        # of the current raw data block
        raw = residue + decomp.decompress(data)
        # process_data(current_block) => do the processing of the current data block
        current_block = raw.split(b'\n')
        # last element is an incomplete line, or b'' if the block ended on '\n'
        residue = current_block.pop()
        total_lines += len(current_block)
        print('%i lines/sec' % (total_lines / (time.time() - t0)))
    # process_data(residue) => now finish processing the last line
    total_lines += 1
    print('Final: %i lines/sec' % (total_lines / (time.time() - t0)))
Here I read a chunk of the binary file, feed it into the decompressor, and receive a chunk of decompressed data. Be aware that the decompressed chunks have to be concatenated to restore the original data; this is why the last entry needs special treatment.
In my experiments it runs a little faster than your solution with io.BytesIO(). bz2 is known to be slow, so if it bothers you, consider migrating to snappy or zstandard.
Regarding the time it takes to process bz2 in Python: it might be fastest to decompress the file into a temporary one using a Linux utility and then process a normal text file. Otherwise you will be dependent on Python's implementation of bz2.
This method already gives a x2 improvement over native bz2.open:
import bz2, time, io

def chunked_readlines(f):
    s = io.BytesIO()
    while True:
        buf = f.read(1024 * 1024)
        if not buf:
            last = s.getvalue()
            if last:
                yield last  # don't lose a final line with no trailing newline
            return
        s.write(buf)
        s.seek(0)
        L = s.readlines()
        yield from L[:-1]
        s = io.BytesIO()
        # very important: the last line read in the 1 MB chunk might be
        # incomplete, so we keep it to be processed in the next iteration
        s.write(L[-1])
        # TODO: check whether this is OK if f.read() stopped in the middle of a \r\n

t0 = time.time()
i = 0
with bz2.open(r"D:\test.bz2", 'rb') as f:
    for l in chunked_readlines(f):  # 500k lines per second
    # for l in f:                   # 250k lines per second
        i += 1
        if i % 100000 == 0:
            print('%i lines/sec' % (i / (time.time() - t0)))
It is probably possible to do even better.
We could have a x4 improvement if we could use s as a simple bytes object instead of an io.BytesIO. But unfortunately, in this case splitlines() does not behave as expected: splitlines() and iterating over an opened file give different results.
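That mismatch is easy to demonstrate with a small self-contained check: bytes.splitlines() also treats a bare \r as a line boundary and strips the terminators, while iterating a binary file object splits only on \n and keeps the newline characters:

```python
import io

data = b"one\rtwo\nthree"

# splitlines() breaks on the lone \r and drops the line endings
assert data.splitlines() == [b"one", b"two", b"three"]

# file iteration splits on \n only and keeps the terminator
assert list(io.BytesIO(data)) == [b"one\rtwo\n", b"three"]
```

So swapping io.BytesIO for plain bytes would silently change which "lines" the caller sees whenever the data contains stray \r bytes.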

How to read from one file and write to several other files

I have a file containing several images. The images are chopped up in packets; I call a packet a chunk in my code example. Every chunk contains a header with: count, uniqueID, start, length. Start contains the start index of the img_data within the chunk, and length is the length of the img_data within the chunk. Count runs from 0 to 255, and the img_data of all these 256 chunks combined forms one image.

Before reading the chunks I open a 'dummy.bin' file to have something to write to; otherwise I get that f is not defined. At the end I remove the 'dummy.bin' file. The problem is that I need a file reference to start with. Although this code works, I wonder if there is another way than creating a dummy file to get a file reference.

The first chunk in 'test_file.bin' has hdr['count'] == 0, so f.close() will be called in the first iteration. That is why I need a file reference f before entering the for loop. Apart from that, every iteration I write img_data to a file with f.write(img_data); here too I need a file reference that is defined before entering the for loop, in case the first chunk has hdr['count'] != 0.

Is this the best solution? How do you generally read from one file and create several other files from it?
# read file, write several other files
import os

def read_chunks(filename, chunksize=512):
    f = open(filename, 'rb')
    while True:
        chunk = f.read(chunksize)
        if chunk:
            yield chunk
        else:
            break

def parse_header(data):
    count = data[0]
    uniqueID = data[1]
    start = data[2]
    length = data[3]
    return {'count': count, 'uniqueID': uniqueID, 'start': start, 'length': length}

filename = 'test_file.bin'
f = open('dummy.bin', 'wb')
for chunk in read_chunks(filename):
    hdr = parse_header(chunk)
    if hdr['count'] == 0:
        f.close()
        img_filename = 'img_' + str(hdr['uniqueID']) + '.raw'
        f = open(img_filename, 'wb')
    img_data = chunk[hdr['start']: hdr['start'] + hdr['length']]
    f.write(img_data)
    print(type(f))
f.close()
os.remove('dummy.bin')
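One common way to avoid the dummy file is to start with f = None and guard the close and write calls. Below is a self-contained sketch of that pattern; the extract_images name and outdir parameter are my own for illustration, while the header layout (count, uniqueID, start, length) is the one from the question:

```python
import os

def parse_header(data):
    # Header layout from the question: count, uniqueID, start, length.
    return {'count': data[0], 'uniqueID': data[1],
            'start': data[2], 'length': data[3]}

def read_chunks(filename, chunksize=512):
    with open(filename, 'rb') as fh:
        while True:
            chunk = fh.read(chunksize)
            if not chunk:
                break
            yield chunk

def extract_images(filename, outdir='.'):
    f = None  # no output file until the first count == 0 header is seen
    for chunk in read_chunks(filename):
        hdr = parse_header(chunk)
        if hdr['count'] == 0:
            if f is not None:
                f.close()
            img_filename = os.path.join(outdir, 'img_%d.raw' % hdr['uniqueID'])
            f = open(img_filename, 'wb')
        if f is not None:  # ignore chunks arriving before the first image starts
            f.write(chunk[hdr['start']:hdr['start'] + hdr['length']])
    if f is not None:
        f.close()
```

The only extra state is the optional f; the trailing close also handles the last image, so no placeholder file ever needs to be created or removed.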

Python JPEG file carver takes incredibly long on small dumps

I'm writing a JPEG file carver as a part of a forensic lab.
The assignment is to write a script that can extract JPEG files from a 10 MB dd-dump. We are not permitted to read the whole file into a memory variable (because if it were too big, it would exhaust memory); instead, the Python script should read directly from the file.
My script seems to work perfectly fine, but it takes extremely long to finish (upwards of 30-40 minutes). Is this expected behavior, even for such a small 10 MB file? Is there anything I can do to shorten the time?
This is my code:
# Extract JPEGs from a file.
import sys

with open(sys.argv[1], "rb") as binary_file:
    binary_file.seek(0, 2)  # Seek to the end
    num_bytes = binary_file.tell()  # Get the file size
    count = 0  # Our counter of which file we are currently extracting.
    for i in range(num_bytes):
        binary_file.seek(i)
        four_bytes = binary_file.read(4)
        whole_file = binary_file.read()
        if four_bytes == b"\xff\xd8\xff\xd8" or four_bytes == b"\xff\xd8\xff\xe0" or four_bytes == b"\xff\xd8\xff\xe1":  # JPEG signature
            whole_file = whole_file.split(four_bytes)
            for photo in whole_file:
                count += 1
                name = "Pic " + str(count) + ".jpg"
                file(name, "wb").write(four_bytes + photo)
                print name
Aren't you reading your whole file in every iteration of the for loop?
E: What I mean is that at every byte you read your whole file (for a 10 MB file you are reading 10 MB ten million times), even if the four bytes didn't match the JPEG signature.
E3: What you need is, at every byte, to check whether there is a file to be written (by checking for the header/signature). If you match the signature, you start writing bytes to a file, but first, since you already read 4 bytes, you have to jump back to where you were. Then, while reading each byte and writing it to the file, you check for the JPEG ending. When the file ends, you write the final byte, close the stream, and start searching for a header again. This will not extract a JPEG from inside another JPEG.
import sys

with open("C:\\Users\\rauno\\Downloads\\8-jpeg-search\\8-jpeg-search.dd", "rb") as binary_file:
    binary_file.seek(0, 2)  # Seek to the end
    num_bytes = binary_file.tell()  # Get the file size
    write_to_file = False
    count = 0  # Our counter of which file we are currently extracting.
    for i in range(num_bytes):
        binary_file.seek(i)
        if write_to_file is False:
            four_bytes = binary_file.read(4)
            if four_bytes == b"\xff\xd8\xff\xd8" or four_bytes == b"\xff\xd8\xff\xe0" or four_bytes == b"\xff\xd8\xff\xe1":  # JPEG signature
                write_to_file = True
                count += 1
                name = "Pic " + str(count) + ".jpg"
                f = open(name, "wb")
                binary_file.seek(i)
        if write_to_file is True:  # not 'else', or you miss the first byte
            this_byte = binary_file.read(1)
            f.write(this_byte)
            next_byte = binary_file.read(1)  # "read" advances the file position (which is why there is .seek(i) at the top)
            if this_byte == b"\xff" and next_byte == b"\xd9":
                f.write(next_byte)
                f.close()
                write_to_file = False
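If memory-mapping the dump is allowed (mmap pages the file in on demand rather than loading it whole), a much faster alternative is to let mmap.find scan for the signatures instead of seeking and reading byte by byte. A sketch under that assumption, reusing the signature list and output naming from the code above; the carve name is my own, and the naive scan stops at the first \xff\xd9, which can in principle occur inside entropy-coded data:

```python
import mmap

SIGNATURES = (b"\xff\xd8\xff\xd8", b"\xff\xd8\xff\xe0", b"\xff\xd8\xff\xe1")
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def carve(path, out_prefix="Pic"):
    """Naive carver: find each signature, copy up to the next EOI marker."""
    names = []
    with open(path, "rb") as fh:
        with mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            pos = 0
            count = 0
            while True:
                hits = [mm.find(sig, pos) for sig in SIGNATURES]
                hits = [h for h in hits if h != -1]
                if not hits:
                    break  # no more headers after pos
                start = min(hits)
                end = mm.find(EOI, start + 4)
                if end == -1:
                    break  # header with no ending: stop carving
                count += 1
                name = "%s %d.jpg" % (out_prefix, count)
                with open(name, "wb") as out:
                    out.write(mm[start:end + 2])
                names.append(name)
                pos = end + 2
    return names
```

Each find call scans in C rather than one Python-level iteration per byte, which is where the 30-40 minutes go in the original script.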

Splitting a CSV file into equal parts?

I have a large CSV file that I would like to split into a number of pieces equal to the number of CPU cores in the system. I then want to use multiprocessing to have all the cores work on the file together. However, I am having trouble even splitting the file into parts. I've looked all over Google, and I found some sample code that appears to do what I want. Here is what I have so far:
def split(infilename, num_cpus=multiprocessing.cpu_count()):
    READ_BUFFER = 2**13
    total_file_size = os.path.getsize(infilename)
    print total_file_size
    files = list()
    with open(infilename, 'rb') as infile:
        for i in xrange(num_cpus):
            files.append(tempfile.TemporaryFile())
            this_file_size = 0
            while this_file_size < 1.0 * total_file_size / num_cpus:
                files[-1].write(infile.read(READ_BUFFER))
                this_file_size += READ_BUFFER
            files[-1].write(infile.readline())  # get the possible remainder
            files[-1].seek(0, 0)
    return files

files = split("sample_simple.csv")
print len(files)

for ifile in files:
    reader = csv.reader(ifile)
    for row in reader:
        print row
The two prints show the correct file size and that it was split into 4 pieces (my system has 4 CPU cores).
However, the last section of the code that prints all the rows in each of the pieces gives the error:
for row in reader:
_csv.Error: line contains NULL byte
I tried printing the rows without running the split function and it prints all the values correctly. I suspect the split function has added some NULL bytes to the resulting 4 file pieces but I'm not sure why.
Does anyone know if this a correct and fast method to split the file? I just want resulting pieces that can be read successfully by csv.reader.
As I said in a comment, CSV files need to be split on row (or line) boundaries. Your code doesn't do this and potentially breaks rows up somewhere in the middle, which I suspect is the cause of your _csv.Error.
The following avoids that by processing the input file as a series of lines. I've tested it, and it works standalone in the sense that it divides the sample file into approximately equal-size chunks; "approximately" because it's unlikely that a whole number of rows will fit exactly into a chunk.
Update
This is a substantially faster version of the code than I originally posted. The improvement comes from using the temp file's own tell() method to determine the constantly changing length of the file as it's being written, instead of calling os.path.getsize(), which eliminates the need to flush() the file and call os.fsync() on it after each row is written.
import csv
import multiprocessing
import os
import tempfile

def split(infilename, num_chunks=multiprocessing.cpu_count()):
    READ_BUFFER = 2**13
    in_file_size = os.path.getsize(infilename)
    print 'in_file_size:', in_file_size
    chunk_size = in_file_size // num_chunks
    print 'target chunk_size:', chunk_size
    files = []
    with open(infilename, 'rb', READ_BUFFER) as infile:
        for _ in xrange(num_chunks):
            temp_file = tempfile.TemporaryFile()
            while temp_file.tell() < chunk_size:
                try:
                    temp_file.write(infile.next())
                except StopIteration:  # end of infile
                    break
            temp_file.seek(0)  # rewind
            files.append(temp_file)
    return files

files = split("sample_simple.csv", num_chunks=4)
print 'number of files created: {}'.format(len(files))

for i, ifile in enumerate(files, start=1):
    print 'size of temp file {}: {}'.format(i, os.path.getsize(ifile.name))
    print 'contents of file {}:'.format(i)
    reader = csv.reader(ifile)
    for row in reader:
        print row
    print ''
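For what it's worth, the same line-boundary idea ports to Python 3 like this; split_on_lines is my own name for the sketch, and it differs from the version above in explicitly giving any rounding remainder to the last piece:

```python
import multiprocessing
import os
import tempfile

def split_on_lines(infilename, num_chunks=multiprocessing.cpu_count()):
    # Each temp file receives only whole lines, so csv.reader can
    # parse every piece without hitting a row cut in half.
    chunk_size = os.path.getsize(infilename) // num_chunks
    files = []
    with open(infilename, 'rb') as infile:
        for idx in range(num_chunks):
            temp_file = tempfile.TemporaryFile()
            if idx == num_chunks - 1:
                temp_file.write(infile.read())  # last piece takes the remainder
            else:
                while temp_file.tell() < chunk_size:
                    line = infile.readline()
                    if not line:  # end of input
                        break
                    temp_file.write(line)
            temp_file.seek(0)
            files.append(temp_file)
    return files
```

Concatenating the pieces reproduces the input byte for byte, and every piece except possibly the last ends on a newline.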
