Read specific bytes of a file in Python

I want to specify an offset and then read the bytes of a file from that point, like
offset = 5
read(5)
and then read bytes 6-10, and so on. I read about seek, but I can't understand how it works and the examples aren't descriptive enough.
What does seek(offset, 1) return?
Thanks

The values for the second parameter of seek are 0, 1, or 2:
0 - offset is relative to start of file
1 - offset is relative to current position
2 - offset is relative to end of file
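If you prefer named constants to these magic numbers, the os module defines SEEK_SET (0), SEEK_CUR (1) and SEEK_END (2). A minimal sketch, assuming the hello.txt file used in the transcript further down:

import os

with open('hello.txt', 'rb') as f:
    f.seek(5, os.SEEK_SET)   # absolute: 5 bytes from the start of the file
    f.seek(3, os.SEEK_CUR)   # relative: 3 bytes forward from the current position
    f.seek(-4, os.SEEK_END)  # relative: 4 bytes back from the end of the file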
Remember you can check out the help -
>>> help(file.seek)
Help on method_descriptor:
seek(...)
seek(offset[, whence]) -> None. Move to new file position.
Argument offset is a byte count. Optional argument whence defaults to
0 (offset from start of file, offset should be >= 0); other values are 1
(move relative to current position, positive or negative), and 2 (move
relative to end of file, usually negative, although many platforms allow
seeking beyond the end of a file). If the file is opened in text mode,
only offsets returned by tell() are legal. Use of other offsets causes
undefined behavior.
Note that not all file objects are seekable.

Just play with Python's REPL to see for yourself:
[...]:/tmp$ cat hello.txt
hello world
[...]:/tmp$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> f = open('hello.txt', 'rb')
>>> f.seek(6, 1) # move the file pointer forward 6 bytes (i.e. to the 'w')
>>> f.read() # read the rest of the file from the current file pointer
'world\n'

In Python 2, seek doesn't return anything useful (it returns None; in Python 3 it returns the new absolute position). It simply moves the internal file pointer to the given offset. The next read will start from that pointer onwards.
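To answer the original question directly, here is a minimal sketch of a helper for reading a given byte range; the function name is made up for illustration:

def read_bytes(path, offset, length):
    # read `length` bytes starting `offset` bytes from the start of the file
    with open(path, 'rb') as f:
        f.seek(offset)          # whence defaults to 0: absolute positioning
        return f.read(length)   # may return fewer bytes near the end of the file

first = read_bytes('hello.txt', 0, 5)   # bytes 0-4
second = read_bytes('hello.txt', 5, 5)  # bytes 5-9, and so on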

Related

Getting csv.Sniffer to work with quoted values

I'm trying to use Python's CSV Sniffer tool, as suggested in many Stack Overflow answers, to guess whether a given CSV file is delimited by ; or ,.
It works fine with basic files, but when a value contains a delimiter and is therefore surrounded by double quotes (as the standard requires), the sniffer throws _csv.Error: Could not determine delimiter.
Has anyone experienced this before?
Here is a minimal failing CSV file:
column1,column2
0,"a, b"
And the proof of concept:
Python 3.5.1 (default, Dec 7 2015, 12:58:09)
[GCC 5.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import csv
>>> f = open("example.csv", "r")
>>> f.seek(0);
0
>>> csv.Sniffer().sniff(f.read(), delimiters=';,')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.5/csv.py", line 186, in sniff
raise Error("Could not determine delimiter")
_csv.Error: Could not determine delimiter
I have total control over the generation of the input CSV file, but sometimes it is modified by a third party using MS Office and the delimiter is replaced by semicolons, so I have to use this guessing approach.
I know I could stop using commas in the input file, but I would like to know if I'm doing something wrong first.
You are giving the sniffer too much input. Your sample file does work if you run:
csv.Sniffer().sniff(f.readline())
which uses only the header row to determine the delimiter character. If you want to understand why the Sniffer heuristics fail for more data, there is no substitute for reading the csv.py library source code.
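Putting that together, a minimal sketch that sniffs the dialect from the header row only and then rewinds before parsing, using the example.csv file from the question:

import csv

with open('example.csv', 'r', newline='') as f:
    dialect = csv.Sniffer().sniff(f.readline(), delimiters=';,')
    f.seek(0)  # rewind so the reader sees the header row again
    for row in csv.reader(f, dialect):
        print(row)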

What does `if file.find('freq-') != -1` mean?

I'm a chemistry student and want to write a script to extract some data (like coupling constants and interproton distances) from Gaussian output files.
I found a script which extracts chemical shifts from Gaussian output files. However, I don't understand what if file.find('freq-') != -1 means in the script.
Here's part of the script (since the script does other things as well, I've only shown the bit relevant to my question):
def read_gaussian_freq_outfiles(list_of_files):
    list_of_freq_outfiles = []
    for file in list_of_files:
        if file.find('freq-') != -1:
            list_of_freq_outfiles.append([file, int(get_conf_number(file)), open(file, "r").readlines()])
    return list_of_freq_outfiles

def read_gaussian_outputfiles():
    list_of_files = []
    for file in glob.glob('*.out'):
        list_of_files.append(file)
    return list_of_files
I think in the read_gaussian_outputfiles() bit we create a list of files, simply adding every file with the extension '.out' to the list.
The read_gaussian_freq_outfiles(list_of_files) bit probably lists the files which have "freq-" in the file name. But what does file.find('freq-') != -1 mean?
Does it mean that whatever we find in the file name doesn't equal -1, or something else?
Some other additional information: the format of the gaussian output filename is: xxxx-opt_freq-conf-yyyy.out where xxxx is the name of your molecule and yyyy is a number.
When s.find(foo) fails to find foo in s, it returns -1. Therefore, when s.find(foo) does not return -1, we know it didn't fail.
read_gaussian_freq_outfiles looks for the term "freq-" in each of the names of files in list_of_files. If it succeeds in finding this phrase in the name of a file, it appends a list containing this file, a "conf number" (not sure what this is), and the contents of the file, to a list called list_of_freq_outfiles.
I created three files, goodbye.txt, hello.txt, and helloworld.txt to demonstrate usage.
In this example, I'll print all files that end with .txt, create a list of files, then print all files that have the phrase "goodbye" in the filename. This should only print goodbye.txt.
09:53 $ ls
goodbye.txt hello.txt helloworld.txt
09:53 $ python
Python 2.7.11 (default, Dec 5 2015, 14:44:47)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import glob
>>> for file in glob.glob('*.txt'):
...     print(file)
...
goodbye.txt
hello.txt
helloworld.txt
>>> list_of_files = [ file for file in glob.glob('*.txt') ]
>>> print(list_of_files)
['goodbye.txt', 'hello.txt', 'helloworld.txt']
>>> for file in list_of_files:
...     if file.find('goodbye') != -1:
...         print(file)
...
goodbye.txt
Indeed, goodbye.txt is the only file printed.
As the other answers also show: if .find() returns -1, it could not find what you're looking for. This is because .find returns the first index at which it finds your query. So for the sentence
The cat is on the mat
sentence.find('cat') returns 4 (since 'cat' starts at index 4; indexing starts at 0!).
However, sentence.find('dog') returns the only thing it can when it cannot find the query: -1. If it returned 0 for "not found", you might think your query starts at index 0. With -1, you know it could not be found.
Python's string find method looks for the occurrence of a substring in a given string (ref http://www.tutorialspoint.com/python/string_find.htm).
Here it is picking out all the filenames with the substring 'freq-' in them.
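As an aside, the more idiomatic way to write this membership test is Python's in operator. A minimal sketch, with file names made up to follow the xxxx-opt_freq-conf-yyyy.out format described in the question:

list_of_files = ['mymol-opt_freq-conf-0001.out', 'mymol-sp-conf-0002.out']

# equivalent to file.find('freq-') != -1, but easier to read
freq_files = [f for f in list_of_files if 'freq-' in f]
print(freq_files)  # ['mymol-opt_freq-conf-0001.out']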

Python hashlib MD5 digest of any UNC file always yields same hash

The below code shows that three files which are on a UNC share hosted on another machine have the same hash. It also shows that local files have different hashes. Why would this be? I feel that there is some UNC consideration that I don't know about.
Python 2.7.5 (default, May 15 2013, 22:44:16) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import hashlib
>>> fn_a = '\\\\some.host.com\\Shares\\folder1\\file_a'
>>> fn_b = '\\\\some.host.com\\Shares\\folder1\\file_b'
>>> fn_c = '\\\\some.host.com\\Shares\\folder2\\file_c'
>>> fn_d = 'E:\\file_d'
>>> fn_e = 'E:\\file_e'
>>> fn_f = 'E:\\folder3\\file_f'
>>> f_a = open(fn_a, 'r')
>>> f_b = open(fn_b, 'r')
>>> f_c = open(fn_c, 'r')
>>> f_d = open(fn_d, 'r')
>>> f_e = open(fn_e, 'r')
>>> f_f = open(fn_f, 'r')
>>> hashlib.md5(f_a.read()).hexdigest()
'54637fdcade4b7fd7cabd45d51ab8311'
>>> hashlib.md5(f_b.read()).hexdigest()
'54637fdcade4b7fd7cabd45d51ab8311'
>>> hashlib.md5(f_c.read()).hexdigest()
'54637fdcade4b7fd7cabd45d51ab8311'
>>> hashlib.md5(f_d.read()).hexdigest()
'd2bf541b1a9d2fc1a985f65590476856'
>>> hashlib.md5(f_e.read()).hexdigest()
'e84be3c598a098f1af9f2a9d6f806ed5'
>>> hashlib.md5(f_f.read()).hexdigest()
'e11f04ed3534cc4784df3875defa0236'
EDIT: To further investigate the problem, I also tested using a file from another host. It appears that changing the host will change the result.
>>> fn_h = '\\\\host\\share\\file'
>>> f_h = open(fn_h, 'r')
>>> hashlib.md5(f_h.read()).hexdigest()
'f23ee2dbbb0040bf2586cfab29a03634'
...but then I tried a different file on the new host, and got a new result!
>>> fn_i = '\\\\host\\share\\different_file'
>>> f_i = open(fn_i, 'r')
>>> hashlib.md5(f_i.read()).hexdigest()
'a8ad771db7af8c96f635bcda8fdce961'
So, now I'm really confused. Could it have something to do with the fact that the original host is a \\host.com format and the new host is a \\host format?
I did some additional research based on the comments and answers everyone provided. I decided I needed to study permutations of these two features of the code:
1. Whether a raw string literal is used for the path name, i.e. whether:
A. the file path string is raw, with single backslashes in the path, vs.
B. the file path string is not raw, with doubled backslashes in the path.
(FYI to those who don't know, a raw string is one which is preceded by an "r", like this: r'This is a raw string'.)
2. Whether the open function mode is r or rb.
(FYI again to those who don't know, the b in rb mode indicates that the file should be read as binary.)
The results demonstrated:
- The string literal / backslashes make no difference in whether or not the hashes of different files are different.
- My error was not opening the file in binary mode. When using rb mode in open, I got different results.
Yay! And thanks for the help.
Use seek(0) if you intend to read a file object again; otherwise the file has already been read to the end, and calling read() again will just return an empty string.
I can't reproduce your problem. I'm using Python 3.4 on Windows 7 here with the following test script, which accesses files on a network hard disk:
import sys, hashlib

def main():
    fn0 = r'\\NAS\Public\Software\Backup\Test\Vagrantfile'
    fn1 = r'\\NAS\Public\Software\Backup\Test\z.xml'
    with open(fn0, 'rb') as f:
        h0 = hashlib.md5(f.read())
        print(h0.hexdigest())
    with open(fn1, 'rb') as f:
        h1 = hashlib.md5(f.read())
        print(h1.hexdigest())

if __name__ == '__main__':
    sys.exit(main())
Running this results in two different hash values (as expected):
c:\src\python>python hashtest.py
8af202dffb88739c2dbe188c12291e3d
2ff3db61ff37ca5ceac6a59fd7c1018b
If reading the file contents returns different data for the remote files then passing that data into md5 has to result in different hash values. You might want to print out the first 80 bytes of each file as a check that you are getting what you expect.
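For large files it is also common to hash in fixed-size chunks rather than reading the whole file into memory at once. A minimal sketch (the chunk size is an arbitrary choice):

import hashlib

def md5_of_file(path, chunk_size=64 * 1024):
    # hash a file in binary mode, reading it chunk by chunk
    h = hashlib.md5()
    with open(path, 'rb') as f:  # binary mode matters, as the question discovered
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()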

Python file.write() taking two tries?

Not sure how to explain this; any help will be appreciated!
Python 2.6.6 (r266:84292, Sep 15 2010, 16:22:56)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib2, pynotify, tempfile, os
>>> opener = urllib2.build_opener()
>>> page = opener.open('http://img.youtube.com/vi/RLGJU_xUVTs/1.jpg')
>>> thumb = page.read()
>>> temp = tempfile.NamedTemporaryFile(suffix='.jpg')
>>> temp.write(thumb)
>>> os.path.getsize(temp.name)
0
>>> temp.write(thumb)
>>> os.path.getsize(temp.name)
4096
thanks!
If you open the temp file, you'll see that it contains one whole copy, plus a partial copy, of the data you are writing.
Flush the file instead of writing a second time:
temp.flush()
The data wasn't written to disk the first time because the contents aren't large enough to fill the internal buffer. The second write overfills the buffer, so one buffer's worth of data gets written out.
As Cameron points out in his answer, the buffer is automatically flushed when you close the file. If you want to keep it open for some reason (and the fact that this is an issue for you seems to indicate that you do), then you can call flush and the data will be written right away.
You haven't called flush() or close() on the file object before checking its size on disk -- there is an internal buffer that is automatically flushed only after a certain amount of data is written (this avoids too many expensive trips to the disk when doing many writes).
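A minimal sketch of the fix, using made-up bytes in place of the downloaded thumbnail:

import os, tempfile

temp = tempfile.NamedTemporaryFile(suffix='.jpg')
temp.write(b'some image bytes')    # sits in the internal buffer for now
temp.flush()                       # force the buffered data out to disk
print(os.path.getsize(temp.name))  # now reflects the written data
temp.close()                       # also flushes, then deletes the temp file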

Is Python's seek() on OS X broken?

I'm trying to implement a simple method to read new lines from a log file each time the method is called.
I've looked at the various suggestions, both on Stack Overflow (e.g. here) and elsewhere, for simulating "tail" functionality; most involve using readline() to read in new lines as they're appended to the file. It should be simple enough, but I can't get it to work properly on OS X 10.6.4 with the included Python 2.6.1.
To get to the heart of the problem, I tried the following:
Open two terminal windows.
In one, create a text file "test.log" with three lines:
one
two
three
In the other, start python and execute the following code:
Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.stat('test.log')
posix.stat_result(st_mode=33188, st_ino=23465217, st_dev=234881025L, st_nlink=1, st_uid=666, st_gid=20, st_size=14, st_atime=1281782739, st_mtime=1281782738, st_ctime=1281782738)
>>> log = open('test.log')
>>> log.tell()
0
>>> log.seek(0,2)
>>> log.tell()
14
>>>
So we see with the tell() that seek(0,2) brought us to the end of the file as reported by os.stat(), byte 14.
In the first shell, add another two lines to "test.log" so that it looks like this:
one
two
three
four
five
Go back to the second shell, and execute the following code:
>>> os.stat('test.log')
posix.stat_result(st_mode=33188, st_ino=23465260, st_dev=234881025L, st_nlink=1, st_uid=666, st_gid=20, st_size=24, st_atime=1281783089, st_mtime=1281783088, st_ctime=1281783088)
>>> log.seek(0,2)
>>> log.tell()
14
>>>
Here we see from os.stat() that the file's size is now 24 bytes, but seeking to the end of the file somehow still points to byte 14?? I've tried the same on Ubuntu with Python 2.5 and it works as I expect. I tried with 2.5 on my Mac, but got the same results as with 2.6.
I must be missing something fundamental here. Any ideas?
How are you adding two more lines to the file?
Most text editors will go through operations a lot like this:
fd = open(filename, read)
file_data = read(fd)
close(fd)
/* you edit your file, and save it */
unlink(filename)
fd = open(filename, write, create)
write(fd, file_data)
The file is different. (Check it with ls -li; the inode number will change for almost every text editor.)
If you append to the log file using your shell's >> redirection, it'll work exactly as it should:
$ echo one >> test.log
$ echo two >> test.log
$ echo three >> test.log
$ ls -li test.log
671147 -rw-r--r-- 1 sarnold sarnold 14 2010-08-14 04:15 test.log
$ echo four >> test.log
$ ls -li test.log
671147 -rw-r--r-- 1 sarnold sarnold 19 2010-08-14 04:15 test.log
>>> log=open('test.log')
>>> log.tell()
0
>>> log.seek(0,2)
>>> log.tell()
19
$ echo five >> test.log
$ echo six >> test.log
>>> log.seek(0,2)
>>> log.tell()
28
Note that the tail(1) command has an -F command line option to handle the case where the file is changed, but a file by the same name exists. (Great for watching log files that might be periodically rotated.)
Short answer: no, seek() isn't broken, but your assumptions are.
Your text editor is creating a new file with the same name, not modifying the old file in place. You can see in your stat result that the st_ino is different. If you were to do os.fstat(log.fileno()), you'd get the old size and old st_ino.
If you want to check for this in your implementation of tail, periodically compare the st_ino of the stat and fstat results. If they differ, there's a new file with the same name.
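A minimal sketch of that check inside a tail-style loop; error handling (e.g. the file briefly not existing mid-rotation) is omitted:

import os, time

def tail(path):
    log = open(path)
    log.seek(0, 2)  # start at the end of the file
    while True:
        line = log.readline()
        if line:
            yield line
            continue
        # no new data: see whether the file was replaced under us
        if os.stat(path).st_ino != os.fstat(log.fileno()).st_ino:
            log.close()
            log = open(path)  # reopen the new file with the same name
        time.sleep(1.0)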
