Evening,
I am working on a project that requires me to read in multichannel wav files in 32-bit float.
When I read a specific file (1 minute long, 6 channels, 48 kHz sample rate) into Matlab and measure it with tic/toc, it parses the file in 2.456482 seconds.
Matlab Code for file reading speed measurement
tic
wavread('C:/data/testData/6ch.wav');
toc
When I do it in python (mind you, I'm pretty unfamiliar with python) it takes 18.1655315617 seconds!
It seems to me like the way I am doing it is inefficient (I did get it down to 18 from 28 but it's still too much...)
I stripped the code to what is relevant to this subject:
Python Code for file reading speed measurement
import wave32
import struct
import time
import numpy as np
def getWavData(inFile):
wavFile = wave32.open(inFile, 'r')
wavParams = wavFile.getparams()
nChannels = wavParams[0]
byteDepth = wavParams[1]
nFrames = wavParams[3]
wavData = np.empty([nFrames, nChannels], np.float32)
frames = wavFile.readframes(nFrames)
for i in range(nFrames):
for j in range(nChannels):
start = ( i * nChannels + j ) * byteDepth
stop = start + byteDepth
wavData[i][j] = struct.unpack('<f', frames[start:stop])[0]
return wavData
inFile = 'C:/data/testData/6ch.wav'
start = time.clock()
data2 = getWavData(inFile)
elapsed = time.clock()
elapsedNew = elapsed - start
print str(elapsedNew)
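Most of the runtime goes into the per-sample struct.unpack calls in the nested loop above. For comparison, here is a hedged sketch of a bulk variant of the same function (getWavDataBulk is just an illustrative name; it assumes the frames really are little-endian 32-bit floats, as in the file described):
import struct
import numpy as np
import wave32

def getWavDataBulk(inFile):
    wavFile = wave32.open(inFile, 'r')
    nChannels, byteDepth, fs, nFrames = wavFile.getparams()[:4]
    frames = wavFile.readframes(nFrames)
    wavFile.close()
    # unpack every sample in one call instead of one struct.unpack per sample
    flat = struct.unpack('<%df' % (nFrames * nChannels), frames)
    return np.array(flat, dtype=np.float32).reshape(nFrames, nChannels)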
Please note that wave32 is a small hack I made on top of wave.py to enable 32-bit float reading.
"""Stuff to parse WAVE files.
Usage.
Reading WAVE files:
f = wave.open(file, 'r')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods read(), seek(), and close().
When the setpos() and rewind() methods are not used, the seek()
method is not necessary.
This returns an instance of a class with the following public methods:
getnchannels() -- returns number of audio channels (1 for
mono, 2 for stereo)
getsampwidth() -- returns sample width in bytes
getframerate() -- returns sampling frequency
getnframes() -- returns number of audio frames
getcomptype() -- returns compression type ('NONE' for linear samples)
getcompname() -- returns human-readable version of
compression type ('not compressed' linear samples)
getparams() -- returns a tuple consisting of all of the
above in the above order
getmarkers() -- returns None (for compatibility with the
aifc module)
getmark(id) -- raises an error since the mark does not
exist (for compatibility with the aifc module)
readframes(n) -- returns at most n frames of audio
rewind() -- rewind to the beginning of the audio stream
setpos(pos) -- seek to the specified position
tell() -- return the current position
close() -- close the instance (make it unusable)
The position returned by tell() and the position given to setpos()
are compatible and have nothing to do with the actual position in the
file.
The close() method is called automatically when the class instance
is destroyed.
Writing WAVE files:
f = wave.open(file, 'w')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods write(), tell(), seek(), and
close().
This returns an instance of a class with the following public methods:
setnchannels(n) -- set the number of channels
setsampwidth(n) -- set the sample width
setframerate(n) -- set the frame rate
setnframes(n) -- set the number of frames
setcomptype(type, name)
-- set the compression type and the
human-readable compression type
setparams(tuple)
-- set all parameters at once
tell() -- return current position in output file
writeframesraw(data)
-- write audio frames without patching up the
file header
writeframes(data)
-- write audio frames and patch up the file header
close() -- patch up the file header and close the
output file
You should set the parameters before the first writeframesraw or
writeframes. The total number of frames does not need to be set,
but when it is set to the correct value, the header does not have to
be patched up.
It is best to first set all parameters, except possibly the
compression type, and then write audio frames using writeframesraw.
When all frames have been written, either call writeframes('') or
close() to patch up the sizes in the header.
The close() method is called automatically when the class instance
is destroyed.
"""
import __builtin__
__all__ = ["open", "openfp", "Error"]
class Error(Exception):
pass
WAVE_FORMAT_PCM = 0x0001
WAVE_FORMAT_IEEE_FLOAT = 0x0003
_array_fmts = None, 'b', 'h', None, 'l'
# Determine endian-ness
import struct
if struct.pack("h", 1) == "\000\001":
big_endian = 1
else:
big_endian = 0
from chunk import Chunk
class Wave_read:
"""Variables used in this class:
These variables are available to the user though appropriate
methods of this class:
_file -- the open file with methods read(), close(), and seek()
set through the __init__() method
_nchannels -- the number of audio channels
available through the getnchannels() method
_nframes -- the number of audio frames
available through the getnframes() method
_sampwidth -- the number of bytes per audio sample
available through the getsampwidth() method
_framerate -- the sampling frequency
available through the getframerate() method
_comptype -- the AIFF-C compression type ('NONE' if AIFF)
available through the getcomptype() method
_compname -- the human-readable AIFF-C compression type
available through the getcomptype() method
_soundpos -- the position in the audio stream
available through the tell() method, set through the
setpos() method
These variables are used internally only:
_fmt_chunk_read -- 1 iff the FMT chunk has been read
_data_seek_needed -- 1 iff positioned correctly in audio
file for readframes()
_data_chunk -- instantiation of a chunk class for the DATA chunk
_framesize -- size of one frame in the file
"""
def initfp(self, file):
self._convert = None
self._soundpos = 0
self._file = Chunk(file, bigendian = 0)
if self._file.getname() != 'RIFF':
raise Error, 'file does not start with RIFF id'
if self._file.read(4) != 'WAVE':
raise Error, 'not a WAVE file'
self._fmt_chunk_read = 0
self._data_chunk = None
while 1:
self._data_seek_needed = 1
try:
chunk = Chunk(self._file, bigendian = 0)
except EOFError:
break
chunkname = chunk.getname()
if chunkname == 'fmt ':
self._read_fmt_chunk(chunk)
self._fmt_chunk_read = 1
elif chunkname == 'data':
if not self._fmt_chunk_read:
raise Error, 'data chunk before fmt chunk'
self._data_chunk = chunk
self._nframes = chunk.chunksize // self._framesize
self._data_seek_needed = 0
break
chunk.skip()
if not self._fmt_chunk_read or not self._data_chunk:
raise Error, 'fmt chunk and/or data chunk missing'
def __init__(self, f):
self._i_opened_the_file = None
if isinstance(f, basestring):
f = __builtin__.open(f, 'rb')
self._i_opened_the_file = f
# else, assume it is an open file object already
try:
self.initfp(f)
except:
if self._i_opened_the_file:
f.close()
raise
def __del__(self):
self.close()
#
# User visible methods.
#
def getfp(self):
return self._file
def rewind(self):
self._data_seek_needed = 1
self._soundpos = 0
def close(self):
if self._i_opened_the_file:
self._i_opened_the_file.close()
self._i_opened_the_file = None
self._file = None
def tell(self):
return self._soundpos
def getnchannels(self):
return self._nchannels
def getnframes(self):
return self._nframes
def getsampwidth(self):
return self._sampwidth
def getframerate(self):
return self._framerate
def getcomptype(self):
return self._comptype
def getcompname(self):
return self._compname
def getparams(self):
return self.getnchannels(), self.getsampwidth(), \
self.getframerate(), self.getnframes(), \
self.getcomptype(), self.getcompname()
def getmarkers(self):
return None
def getmark(self, id):
raise Error, 'no marks'
def setpos(self, pos):
if pos < 0 or pos > self._nframes:
raise Error, 'position not in range'
self._soundpos = pos
self._data_seek_needed = 1
def readframes(self, nframes):
if self._data_seek_needed:
self._data_chunk.seek(0, 0)
pos = self._soundpos * self._framesize
if pos:
self._data_chunk.seek(pos, 0)
self._data_seek_needed = 0
if nframes == 0:
return ''
if self._sampwidth > 1 and big_endian:
# unfortunately the fromfile() method does not take
# something that only looks like a file object, so
# we have to reach into the innards of the chunk object
import array
chunk = self._data_chunk
data = array.array(_array_fmts[self._sampwidth])
nitems = nframes * self._nchannels
if nitems * self._sampwidth > chunk.chunksize - chunk.size_read:
nitems = (chunk.chunksize - chunk.size_read) / self._sampwidth
data.fromfile(chunk.file.file, nitems)
# "tell" data chunk how much was read
chunk.size_read = chunk.size_read + nitems * self._sampwidth
# do the same for the outermost chunk
chunk = chunk.file
chunk.size_read = chunk.size_read + nitems * self._sampwidth
data.byteswap()
data = data.tostring()
else:
data = self._data_chunk.read(nframes * self._framesize)
if self._convert and data:
data = self._convert(data)
self._soundpos = self._soundpos + len(data) // (self._nchannels * self._sampwidth)
return data
#
# Internal methods.
#
def _read_fmt_chunk(self, chunk):
wFormatTag, self._nchannels, self._framerate, dwAvgBytesPerSec, wBlockAlign = struct.unpack('<hhllh', chunk.read(14))
if wFormatTag == WAVE_FORMAT_PCM or wFormatTag==WAVE_FORMAT_IEEE_FLOAT:
sampwidth = struct.unpack('<h', chunk.read(2))[0]
self._sampwidth = (sampwidth + 7) // 8
else:
#sampwidth = struct.unpack('<h', chunk.read(2))[0]
#self._sampwidth = (sampwidth + 7) // 8
raise Error, 'unknown format: %r' % (wFormatTag,)
self._framesize = self._nchannels * self._sampwidth
self._comptype = 'NONE'
self._compname = 'not compressed'
class Wave_write:
"""Variables used in this class:
These variables are user settable through appropriate methods
of this class:
_file -- the open file with methods write(), close(), tell(), seek()
set through the __init__() method
_comptype -- the AIFF-C compression type ('NONE' in AIFF)
set through the setcomptype() or setparams() method
_compname -- the human-readable AIFF-C compression type
set through the setcomptype() or setparams() method
_nchannels -- the number of audio channels
set through the setnchannels() or setparams() method
_sampwidth -- the number of bytes per audio sample
set through the setsampwidth() or setparams() method
_framerate -- the sampling frequency
set through the setframerate() or setparams() method
_nframes -- the number of audio frames written to the header
set through the setnframes() or setparams() method
These variables are used internally only:
_datalength -- the size of the audio samples written to the header
_nframeswritten -- the number of frames actually written
_datawritten -- the size of the audio samples actually written
"""
def __init__(self, f):
self._i_opened_the_file = None
if isinstance(f, basestring):
f = __builtin__.open(f, 'wb')
self._i_opened_the_file = f
try:
self.initfp(f)
except:
if self._i_opened_the_file:
f.close()
raise
def initfp(self, file):
self._file = file
self._convert = None
self._nchannels = 0
self._sampwidth = 0
self._framerate = 0
self._nframes = 0
self._nframeswritten = 0
self._datawritten = 0
self._datalength = 0
self._headerwritten = False
def __del__(self):
self.close()
#
# User visible methods.
#
def setnchannels(self, nchannels):
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
if nchannels < 1:
raise Error, 'bad # of channels'
self._nchannels = nchannels
def getnchannels(self):
if not self._nchannels:
raise Error, 'number of channels not set'
return self._nchannels
def setsampwidth(self, sampwidth):
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
if sampwidth < 1 or sampwidth > 4:
raise Error, 'bad sample width'
self._sampwidth = sampwidth
def getsampwidth(self):
if not self._sampwidth:
raise Error, 'sample width not set'
return self._sampwidth
def setframerate(self, framerate):
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
if framerate <= 0:
raise Error, 'bad frame rate'
self._framerate = framerate
def getframerate(self):
if not self._framerate:
raise Error, 'frame rate not set'
return self._framerate
def setnframes(self, nframes):
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
self._nframes = nframes
def getnframes(self):
return self._nframeswritten
def setcomptype(self, comptype, compname):
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
if comptype not in ('NONE',):
raise Error, 'unsupported compression type'
self._comptype = comptype
self._compname = compname
def getcomptype(self):
return self._comptype
def getcompname(self):
return self._compname
def setparams(self, params):
nchannels, sampwidth, framerate, nframes, comptype, compname = params
if self._datawritten:
raise Error, 'cannot change parameters after starting to write'
self.setnchannels(nchannels)
self.setsampwidth(sampwidth)
self.setframerate(framerate)
self.setnframes(nframes)
self.setcomptype(comptype, compname)
def getparams(self):
if not self._nchannels or not self._sampwidth or not self._framerate:
raise Error, 'not all parameters set'
return self._nchannels, self._sampwidth, self._framerate, \
self._nframes, self._comptype, self._compname
def setmark(self, id, pos, name):
raise Error, 'setmark() not supported'
def getmark(self, id):
raise Error, 'no marks'
def getmarkers(self):
return None
def tell(self):
return self._nframeswritten
def writeframesraw(self, data):
self._ensure_header_written(len(data))
nframes = len(data) // (self._sampwidth * self._nchannels)
if self._convert:
data = self._convert(data)
if self._sampwidth > 1 and big_endian:
import array
data = array.array(_array_fmts[self._sampwidth], data)
data.byteswap()
data.tofile(self._file)
self._datawritten = self._datawritten + len(data) * self._sampwidth
else:
self._file.write(data)
self._datawritten = self._datawritten + len(data)
self._nframeswritten = self._nframeswritten + nframes
def writeframes(self, data):
self.writeframesraw(data)
if self._datalength != self._datawritten:
self._patchheader()
def close(self):
if self._file:
self._ensure_header_written(0)
if self._datalength != self._datawritten:
self._patchheader()
self._file.flush()
self._file = None
if self._i_opened_the_file:
self._i_opened_the_file.close()
self._i_opened_the_file = None
#
# Internal methods.
#
def _ensure_header_written(self, datasize):
if not self._headerwritten:
if not self._nchannels:
raise Error, '# channels not specified'
if not self._sampwidth:
raise Error, 'sample width not specified'
if not self._framerate:
raise Error, 'sampling rate not specified'
self._write_header(datasize)
def _write_header(self, initlength):
assert not self._headerwritten
self._file.write('RIFF')
if not self._nframes:
self._nframes = initlength / (self._nchannels * self._sampwidth)
self._datalength = self._nframes * self._nchannels * self._sampwidth
self._form_length_pos = self._file.tell()
self._file.write(struct.pack('<l4s4slhhllhh4s',
36 + self._datalength, 'WAVE', 'fmt ', 16,
WAVE_FORMAT_PCM, self._nchannels, self._framerate,
self._nchannels * self._framerate * self._sampwidth,
self._nchannels * self._sampwidth,
self._sampwidth * 8, 'data'))
self._data_length_pos = self._file.tell()
self._file.write(struct.pack('<l', self._datalength))
self._headerwritten = True
def _patchheader(self):
assert self._headerwritten
if self._datawritten == self._datalength:
return
curpos = self._file.tell()
self._file.seek(self._form_length_pos, 0)
self._file.write(struct.pack('<l', 36 + self._datawritten))
self._file.seek(self._data_length_pos, 0)
self._file.write(struct.pack('<l', self._datawritten))
self._file.seek(curpos, 0)
self._datalength = self._datawritten
def open(f, mode=None):
if mode is None:
if hasattr(f, 'mode'):
mode = f.mode
else:
mode = 'rb'
if mode in ('r', 'rb'):
return Wave_read(f)
elif mode in ('w', 'wb'):
return Wave_write(f)
else:
raise Error, "mode must be 'r', 'rb', 'w', or 'wb'"
openfp = open # B/W compatibility
Sorry for the long code BTW :)
So my question is: is the wave.py module inherently slow (any alternatives to fix this?) or am I doing something inefficient?
I suppose I could just read in the wav header with a custom function and read the file in a different way, but it seems like this is going to be A LOT of work, especially since I don't know a lot about 1) Python and 2) file handling.
Kind regards,
K.
Edit: I tried unutbu's suggestion but that does not work as scipy does not accept >16 bit.
When I try to parse the wav file through the scipy wavreader I get this message:
C:\Users\King Broos\AppData\Local\Enthought\Canopy32\System\lib\site-packages\scipy\io\wavfile.py:31: WavFileWarning: Unfamiliar format bytes
warnings.warn("Unfamiliar format bytes", WavFileWarning)
C:\Users\King Broos\AppData\Local\Enthought\Canopy32\System\lib\site-packages\scipy\io\wavfile.py:121: WavFileWarning: chunk not understood
warnings.warn("chunk not understood", WavFileWarning)
Looking into the code of wavfile.py this is the line where it throws the exception:
if (comp != 1 or size > 16):
warnings.warn("Unfamiliar format bytes", WavFileWarning)
I really need either 24 or 32 bit, so I guess scipy is not an option?
If you can install or have scipy, then use wavfile.read:
import scipy.io.wavfile as wavfile
sample_rate, x = wavfile.read(filename)
You might also want to study the source code, here.
Note that scipy.io.wavfile does not use Python's wave module. I'm not sure if it reads your IEEE_FLOAT format or not, but it does not do the same check as wave.py:
if wFormatTag == WAVE_FORMAT_PCM or wFormatTag==WAVE_FORMAT_IEEE_FLOAT:
sampwidth = struct.unpack('<h', chunk.read(2))[0]
self._sampwidth = (sampwidth + 7) // 8
else:
#sampwidth = struct.unpack('<h', chunk.read(2))[0]
#self._sampwidth = (sampwidth + 7) // 8
raise Error, 'unknown format: %r' % (wFormatTag,)
so perhaps it will work out-of-the-box.
By the way, instead of making your own module, wave32.py, which is almost exactly the same as wave.py from the standard library, you could use monkey-patching:
import wave
import struct
WAVE_FORMAT_IEEE_FLOAT = 0x0003
def _read_fmt_chunk(self, chunk):
wFormatTag, self._nchannels, self._framerate, dwAvgBytesPerSec, wBlockAlign = struct.unpack('<hhllh', chunk.read(14))
if wFormatTag == WAVE_FORMAT_PCM or wFormatTag == WAVE_FORMAT_IEEE_FLOAT:
sampwidth = struct.unpack('<h', chunk.read(2))[0]
self._sampwidth = (sampwidth + 7) // 8
else:
raise Error, 'unknown format: %r' % (wFormatTag,)
self._framesize = self._nchannels * self._sampwidth
self._comptype = 'NONE'
self._compname = 'not compressed'
wave.Wave_read._read_fmt_chunk = _read_fmt_chunk
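With the patch applied, the standard wave module can deliver the raw float frames, and the conversion to a numpy array can then be done in one bulk call rather than a Python loop (a minimal sketch, assuming little-endian float32 samples):
import numpy as np

w = wave.open('C:/data/testData/6ch.wav', 'r')
nchannels, sampwidth, framerate, nframes = w.getparams()[:4]
raw = w.readframes(nframes)
w.close()
# reinterpret the raw bytes directly; no per-sample unpacking
samples = np.frombuffer(raw, dtype='<f4').reshape(-1, nchannels)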
You can also use numpy directly:
import numpy as np
fs = np.fromfile(filename, dtype=np.int32, count=1, offset=24)[0] # Hz
byte_length = np.fromfile(filename, dtype=np.int32, count=1, offset=40)[0]
to manually read pieces of metadata. I recommend using a hex editor and a WAV format reference to verify the locations of these fields and the offset to the start of the data chunk (it might not be 40 or 44 bytes in).
To read 32-bit WAVE_FORMAT_IEEE_FLOAT:
data = np.fromfile(filename, dtype=np.float32, count=byte_length // 4, offset=44)
To read 24-bit WAVE_FORMAT_PCM:
# prepend zero-byte to each sample (since there's no np.int24)
# then flatten, convert normally and byte-shift to correct for extra byte
data = np.zeros([byte_length // 3, 4], dtype=np.int8)
data[:, 1:] = np.fromfile(filename, dtype=np.int8, count=byte_length, offset=44).reshape(-1, 3)
data = np.right_shift(data.reshape(-1).view(dtype=np.int32), 8)
data = data / 2 ** 23 # if you want to normalize
Depending on the WAV file and machine, this seems to be ~120 times faster than a loop for a 4.4 MB 24-bit .wav file, and there are likely bigger performance gains for bigger files (until swap is required; I think ~5 memory copies are performed, including normalization).
This assumes:
No extra chunks at the start of the file, otherwise the offset= parameters are wrong
A single channel - reshape the array and/or change the byte order for multi-channel data, with something like .reshape(num_channels, -1, order='F') (see the sketch after this list)
Little-endian data, I think
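Tying this back to the file in the question (6 channels of 32-bit float at 48 kHz), a hedged sketch of the multi-channel case; the 44-byte offset is an assumption that only holds for a plain header with no extra chunks:
n_channels = 6
data = np.fromfile(filename, dtype='<f4', count=byte_length // 4, offset=44)
# samples are interleaved within each frame, so reshape to (n_frames, n_channels)
data = data.reshape(-1, n_channels)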
I need to make a program that drives a DYMO LabelManager PnP label printing device. DYMO provides an SDK for this purpose, but after some desperate trying, I'd say the SDK is useless. Then I found a program which is just what I need, written by a guy named S. Bronner. But the problem is that his program is made for Python on UNIX, and I would need it to work on Windows with Python. So I'm asking: is there anyone who could examine this code and convert it to work on Windows for me? My Python skills are not good enough to accomplish this. Here is the code which should be converted:
#!/usr/bin/env python
DEV_CLASS = 3
DEV_VENDOR = 0x0922
DEV_PRODUCT = 0x1001
DEV_NODE = None
DEV_NAME = 'Dymo LabelManager PnP'
FONT_FILENAME = '/usr/share/fonts/truetype/ttf-bitstream-vera/Vera.ttf'
FONT_SIZERATIO = 7./8
import Image
import ImageDraw
import ImageFont
import array
import fcntl
import os
import re
import struct
import subprocess
import sys
import termios
import textwrap
class DymoLabeler:
"""
Create and work with a Dymo LabelManager PnP object.
This class contains both mid-level and high-level functions. In general,
the high-level functions should be used. However, special purpose usage
may require the mid-level functions. That is why they are provided.
However, they should be well understood before use. Look at the
high-level functions for help. Each function is marked in its docstring
with 'HLF' or 'MLF' in parentheses.
"""
def __init__(self, dev):
"""Initialize the LabelManager object. (HLF)"""
self.maxBytesPerLine = 8 # 64 pixels on a 12mm-tape
self.ESC = 0x1b
self.SYN = 0x16
self.cmd = []
self.rsp = False
self.bpl = None
self.dtb = 0
if not os.access(dev, os.R_OK | os.W_OK): return False
self.dev = open(dev, 'r+')
def sendCommand(self):
"""Send the already built command to the LabelManager. (MLF)"""
if len(self.cmd) == 0: return
cmdBin = array.array('B', self.cmd)
cmdBin.tofile(self.dev)
self.cmd = []
if not self.rsp: return
self.rsp = False
rspBin = self.dev.read(8)
rsp = array.array('B', rspBin).tolist()
return rsp
def resetCommand(self):
"""Remove a partially built command. (MLF)"""
self.cmd = []
self.rsp = False
def buildCommand(self, cmd):
"""Add the next instruction to the command. (MLF)"""
self.cmd += cmd
def statusRequest(self):
"""Set instruction to get the device's status. (MLF)"""
cmd = [self.ESC, ord('A')]
self.buildCommand(cmd)
self.rsp = True
def dotTab(self, value):
"""Set the bias text height, in bytes. (MLF)"""
if value < 0 or value > self.maxBytesPerLine: raise ValueError
cmd = [self.ESC, ord('B'), value]
self.buildCommand(cmd)
self.dtb = value
self.bpl = None
def tapeColor(self, value):
"""Set the tape color. (MLF)"""
if value < 0: raise ValueError
cmd = [self.ESC, ord('C'), value]
self.buildCommand(cmd)
def bytesPerLine(self, value):
"""Set the number of bytes sent in the following lines. (MLF)"""
if value < 0 or value + self.dtb > self.maxBytesPerLine: raise ValueError
if value == self.bpl: return
cmd = [self.ESC, ord('D'), value]
self.buildCommand(cmd)
self.bpl = value
def cut(self):
"""Set instruction to trigger cutting of the tape. (MLF)"""
cmd = [self.ESC, ord('E')]
self.buildCommand(cmd)
def line(self, value):
"""Set next printed line. (MLF)"""
self.bytesPerLine(len(value))
cmd = [self.SYN] + value
self.buildCommand(cmd)
def chainMark(self):
"""Set Chain Mark. (MLF)"""
self.dotTab(0)
self.bytesPerLine(self.maxBytesPerLine)
self.line([0x99] * self.maxBytesPerLine)
def skipLines(self, value):
"""Set number of lines of white to print. (MLF)"""
if value <= 0: raise ValueError
self.bytesPerLine(0)
cmd = [self.SYN] * value
self.buildCommand(cmd)
def initLabel(self):
"""Set the label initialization sequence. (MLF)"""
cmd = [0x00] * 8
self.buildCommand(cmd)
def getStatus(self):
"""Ask for and return the device's status. (HLF)"""
self.statusRequest()
rsp = self.sendCommand()
print rsp
def printLabel(self, lines, dotTab):
"""Print the label described by lines. (HLF)"""
self.initLabel()
self.tapeColor(0)
self.dotTab(dotTab)
for line in lines:
self.line(line)
self.skipLines(56) # advance printed matter past cutter
self.skipLines(56) # add symmetric margin
self.statusRequest()
rsp = self.sendCommand()
print rsp
def die(message=None):
if message: print >> sys.stderr, message
sys.exit(1)
def pprint(par, fd=sys.stdout):
rows, columns = struct.unpack('HH', fcntl.ioctl(sys.stderr, termios.TIOCGWINSZ, struct.pack('HH', 0, 0)))
print >> fd, textwrap.fill(par, columns)
def getDeviceFile(classID, vendorID, productID):
# find file containing the device's major and minor numbers
searchdir = '/sys/bus/hid/devices'
pattern = '^%04d:%04X:%04X.[0-9A-F]{4}$' % (classID, vendorID, productID)
deviceCandidates = os.listdir(searchdir)
foundpath = None
for devname in deviceCandidates:
if re.match(pattern, devname):
foundpath = os.path.join(searchdir, devname)
break
if not foundpath: return
searchdir = os.path.join(foundpath, 'hidraw')
devname = os.listdir(searchdir)[0]
foundpath = os.path.join(searchdir, devname)
filepath = os.path.join(foundpath, 'dev')
# get the major and minor numbers
f = open(filepath, 'r')
devnums = [int(n) for n in f.readline().strip().split(':')]
f.close()
devnum = os.makedev(devnums[0], devnums[1])
# check if a symlink with the major and minor numbers is available
filepath = '/dev/char/%d:%d' % (devnums[0], devnums[1])
if os.path.exists(filepath):
return os.path.realpath(filepath)
# check if the relevant sysfs path component matches a file name in
# /dev, that has the proper major and minor numbers
filepath = os.path.join('/dev', devname)
if os.stat(filepath).st_rdev == devnum:
return filepath
# search for a device file with the proper major and minor numbers
for dirpath, dirnames, filenames in os.walk('/dev'):
for filename in filenames:
filepath = os.path.join(dirpath, filename)
if os.stat(filepath).st_rdev == devnum:
return filepath
def access_error(dev):
pprint('You do not have sufficient access to the device file %s:' % dev, sys.stderr)
subprocess.call(['ls', '-l', dev], stdout=sys.stderr)
print >> sys.stderr
pprint('You probably want to add a rule in /etc/udev/rules.d along the following lines:', sys.stderr)
print >> sys.stderr, ' SUBSYSTEM=="hidraw", \\'
print >> sys.stderr, ' ACTION=="add", \\'
print >> sys.stderr, ' DEVPATH=="/devices/pci[0-9]*/usb[0-9]*/0003:0922:1001.*/hidraw/hidraw0", \\'
print >> sys.stderr, ' GROUP="plugdev"'
print >> sys.stderr
pprint('Following that, turn off your device and back on again to activate the new permissions.', sys.stderr)
# get device file name
if not DEV_NODE:
dev = getDeviceFile(DEV_CLASS, DEV_VENDOR, DEV_PRODUCT)
else:
dev = DEV_NODE
if not dev: die("The device '%s' could not be found on this system." % DEV_NAME)
# create dymo labeler object
lm = DymoLabeler(dev)
if not lm: die(access_error(dev))
# check for any text specified on the command line
labeltext = [arg.decode(sys.stdin.encoding) for arg in sys.argv[1:]]
if len(labeltext) == 0: die("No label text was specified.")
# create an empty label image
labelheight = lm.maxBytesPerLine * 8
lineheight = float(labelheight) / len(labeltext)
fontsize = int(round(lineheight * FONT_SIZERATIO))
font = ImageFont.truetype(FONT_FILENAME, fontsize)
labelwidth = max(font.getsize(line)[0] for line in labeltext)
labelbitmap = Image.new('1', (labelwidth, labelheight))
# write the text into the empty image
labeldraw = ImageDraw.Draw(labelbitmap)
for i, line in enumerate(labeltext):
lineposition = int(round(i * lineheight))
labeldraw.text((0, lineposition), line, font=font, fill=255)
del labeldraw
# convert the image to the proper matrix for the dymo labeler object
labelrotated = labelbitmap.transpose(Image.ROTATE_270)
labelstream = labelrotated.tostring()
labelstreamrowlength = labelheight/8 + (1 if labelheight%8 != 0 else 0)
if len(labelstream)/labelstreamrowlength != labelwidth: die('An internal problem was encountered while processing the label bitmap!')
labelrows = [labelstream[i:i+labelstreamrowlength] for i in range(0, len(labelstream), labelstreamrowlength)]
labelmatrix = [array.array('B', labelrow).tolist() for labelrow in labelrows]
# optimize the matrix for the dymo label printer
dottab = 0
while max(line[0] for line in labelmatrix) == 0:
labelmatrix = [line[1:] for line in labelmatrix]
dottab += 1
for line in labelmatrix:
while len(line) > 0 and line[-1] == 0:
del line[-1]
# print the label
lm.printLabel(labelmatrix, dottab)
FONT_FILENAME = '/usr/share/fonts/truetype/ttf-bitstream-vera/Vera.ttf'
won't work because of filesystem differences; it should be changed to the path of a font on your system.
searchdir = '/sys/bus/hid/devices'
won't work either; you have to enumerate the devices in a different way (take a look at the "pywinusb" library?). I'm not sure from where, though. The same problem applies to
filepath = '/dev/char/%d:%d' % (devnums[0], devnums[1])
which isn't accessible on Windows, so you have to do it in a different way.
Besides that, everything else looks OS-independent. If you get any errors after fixing the previous three problems, please edit them into your question.
I'm trying to use the Python code from http://grantcox.com.au/2012/01/decoding-b4u-binary-file-format/ to export .b4u files to HTML format, but for some reason, at this point in the program:
# find the initial caret position - this changes between files for some reason - search for the "Cards" string
for i in range(3):
addr = 104 + i*4
if ''.join(self.parser.read('sssss', addr)) == 'Cards':
caret = addr + 32
break
if caret is None:
return
I get the following error:
if ''.join(self.parser.read('sssss', addr)) == 'Cards':
TypeError: sequence item 0: expected str instance, bytes found
The Python version I'm using is: Python 3.3.1 (v3.3.1:d9893d13c628, Apr 6 2013, 20:25:12).
Any idea how to solve that problem?
I have got it working under Python 2.7.4. My Python 3.3.2 is giving me the same error. I'll get back to you if I find out how to port this piece of code to Python 3.x. It must have something to do with Unicode being the default for strings in Python 3.
Here is a solution I came up with:
def read(self, fmt, offset):
if self.filedata is None:
return None
read = struct.unpack_from('<' + fmt, self.filedata, offset)
xread = []
for each in range(0,len(read)):
try:
xread.append(read[each].decode())
except:
xread.append(read[each])
read = xread
if len(read) == 1:
return read[0]
return read
def string(self, offset):
if self.filedata is None:
return None
s = u''
if offset > 0:
length = self.read('H', offset)
for i in range(length):
raw = self.read('H', offset + i*2 +2)
char = raw ^ 0x7E
s = s + chr(char)
return s
def plain_fixed_string(self, offset):
if self.filedata is None:
return None
plain_bytes = struct.unpack_from('<ssssssssssssssssssssssss', self.filedata, offset)
xplain_bytes = []
for each in range(0,len(plain_bytes)):
try:
xplain_bytes.append(plain_bytes[each].decode())
except:
xplain_bytes.append(plain_bytes[each])
plain_bytes = xplain_bytes
plain_string = ''.join(plain_bytes).strip('\x00')
return plain_string
You can just use these methods instead of provided by original author.
Beware that you should also change unicode() to str() and unichr() to chr() if you see them anywhere. Also remember that print is a function and cannot be used without parentheses ().
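To make the root cause concrete: in Python 3, struct fields of type 's' unpack to bytes, so they have to be decoded before ''.join() can combine them with str values. A minimal sketch with a made-up buffer (not from a real .b4u file):
import struct

buf = b'Cards\x00\x00\x00'
parts = struct.unpack_from('<sssss', buf, 0)          # (b'C', b'a', b'r', b'd', b's')
text = ''.join(p.decode('latin-1') for p in parts)    # decode each byte before joining
print(text == 'Cards')                                # True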
I am writing scripts to process (very large) files by repeatedly unpickling objects until EOF. I would like to partition the file and have separate processes (in the cloud) unpickle and process separate parts.
However my partitioner is not intelligent, it does not know about the boundaries between pickled objects in the file (since those boundaries depend on the object types being pickled, etc.).
Is there a way to scan a file for a "start pickled object" sentinel? The naive way would be to attempt unpickling at successive byte offsets until an object is successfully unpickled, but that yields unexpected errors. It seems that for certain combinations of input, the unpickler falls out of sync and returns nothing for the rest of the file (see code below).
import cPickle
import os
def stream_unpickle(file_obj):
while True:
start_pos = file_obj.tell()
try:
yield cPickle.load(file_obj)
except (EOFError, KeyboardInterrupt):
break
except (cPickle.UnpicklingError, ValueError, KeyError, TypeError, ImportError):
file_obj.seek(start_pos+1, os.SEEK_SET)
if __name__ == '__main__':
import random
from StringIO import StringIO
# create some data
sio = StringIO()
[cPickle.dump(random.random(), sio, cPickle.HIGHEST_PROTOCOL) for _ in xrange(1000)]
sio.flush()
# read from subsequent offsets and find discontinuous jumps in object count
size = sio.tell()
last_count = None
for step in xrange(size):
sio.seek(step, os.SEEK_SET)
count = sum(1 for _ in stream_unpickle(sio))
if last_count is None or count == last_count - 1:
last_count = count
elif count != last_count:
# if successful, these should never print (but they do...)
print '%d elements read from byte %d' % (count, step)
print '(%d elements read from byte %d)' % (last_count, step-1)
last_count = count
The pickletools module has a dis function that shows the opcodes. It shows that there is a STOP opcode that you may scan for:
>>> import pickle, pickletools, StringIO
>>> s = StringIO.StringIO()
>>> pickle.dump('abc', s)
>>> p = s.getvalue()
>>> pickletools.dis(p)
0: S STRING 'abc'
7: p PUT 0
10: . STOP
highest protocol among opcodes = 0
Note, using the STOP opcode is a bit tricky because the codes are of variable length, but it may serve as a useful hint about where the cutoffs are.
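For illustration, a hedged sketch of using that hint: scan for the '.' STOP byte and try to load the candidate slice, keeping in mind that '.' can also appear inside an argument, so the attempted cPickle.loads is what actually decides whether a candidate boundary is real:
import cPickle

def split_on_stop(data):
    # heuristic scan over a string of concatenated pickles
    start = 0
    search_from = 0
    while True:
        end = data.find('.', search_from)
        if end < 0:
            return
        try:
            obj = cPickle.loads(data[start:end + 1])
        except Exception:
            search_from = end + 1   # '.' was part of an argument, keep scanning
            continue
        yield obj
        start = end + 1
        search_from = start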
If you control the pickling step on the other end, then you can improve the situation by adding your own unambiguous alternative separator:
>>> sep = '\xDE\xAD\xBE\xEF'
>>> s = StringIO.StringIO()
>>> pickle.dump('abc', s)
>>> s.write(sep)
>>> pickle.dump([10, 20], s)
>>> s.write(sep)
>>> pickle.dump('def', s)
>>> s.write(sep)
>>> pickle.dump([30, 40], s)
>>> p = s.getvalue()
Before unpacking, split into separate pickles using the known separator:
>>> for pick in p.split(sep):
print pickle.loads(pick)
abc
[10, 20]
def
[30, 40]
In the pickled file, some opcodes have an argument -- a data value that follows the opcode. The data values vary in length and can contain bytes identical to opcodes. Therefore, if you start reading the file from an arbitrary position, you have no way of knowing whether you are looking at an opcode or at the middle of an argument. You must read the file from the beginning and parse the opcodes.
I cooked up this function that skips one pickle from a file, i.e. reads it and parses opcodes, but does not construct the objects. It seems slightly faster than cPickle.loads on some files I have. You could rewrite this in C for more speed (after testing it properly).
Then, you can make one pass over the whole file to get the seek position of each pickle.
from pickletools import code2op, UP_TO_NEWLINE, TAKEN_FROM_ARGUMENT1, TAKEN_FROM_ARGUMENT4
from marshal import loads as mloads
def skip_pickle(f):
"""Skip one pickle from file.
'f' is a file-like object containing the pickle.
"""
while True:
code = f.read(1)
if not code:
raise EOFError
opcode = code2op[code]
if opcode.arg is not None:
n = opcode.arg.n
if n > 0:
f.read(n)
elif n == UP_TO_NEWLINE:
f.readline()
elif n == TAKEN_FROM_ARGUMENT1:
n = ord(f.read(1))
f.read(n)
elif n == TAKEN_FROM_ARGUMENT4:
n = mloads('i' + f.read(4))
f.read(n)
if code == '.':
break
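For example, the "one pass over the whole file to get the seek position of each pickle" step could look like this (a sketch built on the function above):
def index_pickles(f):
    # return the byte offset at which each pickle in the open file starts
    offsets = []
    while True:
        pos = f.tell()
        try:
            skip_pickle(f)
        except EOFError:
            break
        offsets.append(pos)
    return offsets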
Sorry to answer my own question, and thanks to @RaymondHettinger for the idea of adding sentinels.
Here's what worked for me. I created readers and writers that use a sentinel '#S' followed by a data block length at the beginning of each 'record'. The writer has to take care to find any occurrences of '#' in the data being written and double them (into '##'). The reader then uses a look-behind regex to find sentinels, distinct from any matching values that might be in the original stream, and also verify the number of bytes between this sentinel and the subsequent one.
RecordWriter is a context manager (so multiple calls to write() can be encapsulated into a single record if needed). RecordReader is a generator.
Not sure how this is on performance. Any faster/elegant-er solutions are welcome.
import re
import cPickle
from functools import partial
from cStringIO import StringIO
SENTINEL = '#S'
# when scanning look for #S, but NOT ##S
sentinel_pattern = '(?<!#)#S' # uses negative look-behind
sentinel_re = re.compile(sentinel_pattern)
find_sentinel = sentinel_re.search
# when writing replace single # with double ##
write_pattern = '#'
write_re = re.compile(write_pattern)
fix_write = partial(write_re.sub, '##')
# when reading, replace double ## with single #
read_pattern = '##'
read_re = re.compile(read_pattern)
fix_read = partial(read_re.sub, '#')
class RecordWriter(object):
def __init__(self, stream):
self._stream = stream
self._write_buffer = None
def __enter__(self):
self._write_buffer = StringIO()
return self
def __exit__(self, et, ex, tb):
if self._write_buffer.tell():
self._stream.write(SENTINEL) # start
cPickle.dump(self._write_buffer.tell(), self._stream, cPickle.HIGHEST_PROTOCOL) # byte length of user's original data
self._stream.write(fix_write(self._write_buffer.getvalue()))
self._write_buffer = None
return False
def write(self, data):
if not self._write_buffer:
raise ValueError("Must use StreamWriter as a context manager")
self._write_buffer.write(data)
class BadBlock(Exception): pass
def verify_length(block):
fobj = StringIO(block)
try:
stated_length = cPickle.load(fobj)
except (ValueError, IndexError, cPickle.UnpicklingError):
raise BadBlock
data = fobj.read()
if len(data) != stated_length:
raise BadBlock
return data
def RecordReader(stream):
' Read one record '
accum = StringIO()
seen_sentinel = False
data = ''
while True:
m = find_sentinel(data)
if not m:
if seen_sentinel:
accum.write(data)
data = stream.read(80)
if not data:
if accum.tell():
try: yield verify_length(fix_read(accum.getvalue()))
except BadBlock: pass
return
else:
if seen_sentinel:
accum.write(data[:m.start()])
try: yield verify_length(fix_read(accum.getvalue()))
except BadBlock: pass
accum = StringIO()
else:
seen_sentinel = True
data = data[m.end():] # toss
if __name__ == '__main__':
import os
import random
stream = StringIO()
data = [str(random.random()) for _ in xrange(3)]
# test with a string containing sentinel and almost-sentinel
data.append('abc12#jeoht38#SoSooihetS#')
count = len(data)
for i in data:
with RecordWriter(stream) as r:
r.write(i)
size = stream.tell()
start_pos = int(random.random() * size)
stream.seek(start_pos, os.SEEK_SET)
read_data = [s for s in RecordReader(stream)]
print 'Original data: ', data
print 'After seeking to %d, RecordReader returned: %s' % (start_pos, read_data)
I'm trying to read the target file/directory of a shortcut (.lnk) file from Python. Is there a headache-free way to do it? The spec is way over my head.
I don't mind using Windows-only APIs.
My ultimate goal is to find the "(My) Videos" folder on Windows XP and Vista. On XP, by default, it's at %HOMEPATH%\My Documents\My Videos, and on Vista it's %HOMEPATH%\Videos. However, the user can relocate this folder. In that case, the %HOMEPATH%\Videos folder ceases to exist and is replaced by %HOMEPATH%\Videos.lnk, which points to the new "My Videos" folder. And I want its absolute location.
Create a shortcut using Python (via WSH)
import sys
import win32com.client
shell = win32com.client.Dispatch("WScript.Shell")
shortcut = shell.CreateShortCut("t:\\test.lnk")
shortcut.Targetpath = "t:\\ftemp"
shortcut.save()
Read the Target of a Shortcut using Python (via WSH)
import sys
import win32com.client
shell = win32com.client.Dispatch("WScript.Shell")
shortcut = shell.CreateShortCut("t:\\test.lnk")
print(shortcut.Targetpath)
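Applied to the scenario in the question, a hedged sketch: only fall back to resolving Videos.lnk when the plain Videos folder is gone (the folder names are the Vista defaults mentioned above, and expanduser is just one way to get the home directory):
import os
import win32com.client

home = os.path.expanduser('~')
videos = os.path.join(home, 'Videos')
if not os.path.isdir(videos):
    # folder was relocated; read the new location from the shortcut
    shell = win32com.client.Dispatch("WScript.Shell")
    videos = shell.CreateShortCut(os.path.join(home, 'Videos.lnk')).Targetpath
print(videos)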
I know this is an older thread but I feel that there isn't much information on the method that uses the link specification as noted in the original question.
My shortcut-target implementation could not use the win32com module, and after a lot of searching I decided to come up with my own. Nothing else seemed to accomplish what I needed under my restrictions. Hopefully this will help other folks in the same situation.
It uses the binary structure Microsoft has provided for MS-SHLLINK.
import struct
path = 'myfile.txt.lnk'
target = ''
with open(path, 'rb') as stream:
content = stream.read()
# skip first 20 bytes (HeaderSize and LinkCLSID)
# read the LinkFlags structure (4 bytes)
lflags = struct.unpack('I', content[0x14:0x18])[0]
position = 0x18
# if the HasLinkTargetIDList bit is set then skip the stored IDList
# structure and header
if (lflags & 0x01) == 1:
position = struct.unpack('H', content[0x4C:0x4E])[0] + 0x4E
last_pos = position
position += 0x04
# get how long the file information is (LinkInfoSize)
length = struct.unpack('I', content[last_pos:position])[0]
# skip 12 bytes (LinkInfoHeaderSize, LinkInfoFlags, and VolumeIDOffset)
position += 0x0C
# go to the LocalBasePath position
lbpos = struct.unpack('I', content[position:position+0x04])[0]
position = last_pos + lbpos
# read the string at the given position of the determined length
size= (length + last_pos) - position - 0x02
temp = struct.unpack('c' * size, content[position:position+size])
target = ''.join([chr(ord(a)) for a in temp])
Alternatively, you could try using SHGetFolderPath(). The following code might work, but I'm not on a Windows machine right now so I can't test it.
import ctypes
shell32 = ctypes.windll.shell32
# allocate MAX_PATH bytes in buffer
video_folder_path = ctypes.create_string_buffer(260)
# 0xE is CSIDL_MYVIDEO
# 0 is SHGFP_TYPE_CURRENT
# If you want a Unicode path, use SHGetFolderPathW instead
if shell32.SHGetFolderPathA(None, 0xE, None, 0, video_folder_path) >= 0:
    # success, video_folder_path now contains the correct path
    print(video_folder_path.value)
else:
    # error
    print("SHGetFolderPath failed")
Basically call the Windows API directly. Here is a good example found after Googling:
import os, sys
import pythoncom
from win32com.shell import shell, shellcon
shortcut = pythoncom.CoCreateInstance (
shell.CLSID_ShellLink,
None,
pythoncom.CLSCTX_INPROC_SERVER,
shell.IID_IShellLink
)
desktop_path = shell.SHGetFolderPath (0, shellcon.CSIDL_DESKTOP, 0, 0)
shortcut_path = os.path.join (desktop_path, "python.lnk")
persist_file = shortcut.QueryInterface (pythoncom.IID_IPersistFile)
persist_file.Load (shortcut_path)
shortcut.SetDescription ("Updated Python %s" % sys.version)
mydocs_path = shell.SHGetFolderPath (0, shellcon.CSIDL_PERSONAL, 0, 0)
shortcut.SetWorkingDirectory (mydocs_path)
persist_file.Save (shortcut_path, 0)
This is from http://timgolden.me.uk/python/win32_how_do_i/create-a-shortcut.html.
You can search for "python ishelllink" for other examples.
Also, the API reference helps too: http://msdn.microsoft.com/en-us/library/bb774950(VS.85).aspx
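The question asks about reading a target rather than updating one; the same IShellLink interface exposes GetPath in pywin32 for that. A hedged sketch along the lines of the example above (the flag value 0 is an assumption meaning "no special options"; shortcut_path is the .lnk path as before):
import pythoncom
from win32com.shell import shell

link = pythoncom.CoCreateInstance(
    shell.CLSID_ShellLink,
    None,
    pythoncom.CLSCTX_INPROC_SERVER,
    shell.IID_IShellLink
)
link.QueryInterface(pythoncom.IID_IPersistFile).Load(shortcut_path)
target, _find_data = link.GetPath(0)   # resolved path plus find data
print(target)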
I also realize this question is old, but I found the answers here to be the most relevant to my situation.
Like Jared's answer, I also could not use the win32com module, so Jared's use of the binary structure from MS-SHLLINK got me part of the way there. I needed to read shortcuts on both Windows and Linux, where the shortcuts are created on a Samba share by Windows. Jared's implementation didn't quite work for me, I think only because I encountered some different variations of the shortcut format, but it gave me the start I needed (thanks Jared).
So here is a class named MSShortcut which expands on Jared's approach. However, the implementation requires Python 3.4 and above, due to using some pathlib features added in Python 3.4.
#!/usr/bin/python3
# Link Format from MS: https://msdn.microsoft.com/en-us/library/dd871305.aspx
# Need to be able to read shortcut target from .lnk file on linux or windows.
# Original inspiration from: http://stackoverflow.com/questions/397125/reading-the-target-of-a-lnk-file-in-python
from pathlib import Path, PureWindowsPath
import struct, sys, warnings
if sys.hexversion < 0x03040000:
warnings.warn("'{}' module requires python3.4 version or above".format(__file__), ImportWarning)
# doc says class id =
# 00021401-0000-0000-C000-000000000046
# requiredCLSID = b'\x00\x02\x14\x01\x00\x00\x00\x00\xC0\x00\x00\x00\x00\x00\x00\x46'
# Actually Getting:
requiredCLSID = b'\x01\x14\x02\x00\x00\x00\x00\x00\xC0\x00\x00\x00\x00\x00\x00\x46' # puzzling
class ShortCutError(RuntimeError):
pass
class MSShortcut():
"""
interface to Microsoft Shortcut Objects. Purpose:
- I need to be able to get the target from a samba shared on a linux machine
- Also need to get access from a Windows machine.
- Need to support several forms of the shortcut, as they seem be created differently depending on the
creating machine.
- Included some 'flag' types in external interface to help test differences in shortcut types
Args:
scPath (str): path to shortcut
Limitations:
- There are some omitted object properties in the implementation.
Only implemented / tested enough to recover the shortcut target information. Recognized omissions:
- LinkTargetIDList
- VolumeId structure (if captured later, should be a separate class object to hold info)
- Only captured environment block from extra data
- I don't know how or when some of the shortcut information is used, only captured what I recognized,
so there may be bugs related to use of the information
- NO shortcut update support (though might be nice)
- Implementation requires python 3.4 or greater
- Tested only with Unicode data on a 64bit little endian machine, did not consider potential endian issues
Not Debugged:
- localBasePath - didn't check if parsed correctly or not.
- commonPathSuffix
- commonNetworkRelativeLink
"""
def __init__(self, scPath):
"""
Parse and keep shortcut properties on creation
"""
self.scPath = Path(scPath)
self._clsid = None
self._lnkFlags = None
self._lnkInfoFlags = None
self._localBasePath = None
self._commonPathSuffix = None
self._commonNetworkRelativeLink = None
self._name = None
self._relativePath = None
self._workingDir = None
self._commandLineArgs = None
self._iconLocation = None
self._envTarget = None
self._ParseLnkFile(self.scPath)
@property
def clsid(self):
return self._clsid
@property
def lnkFlags(self):
return self._lnkFlags
@property
def lnkInfoFlags(self):
return self._lnkInfoFlags
@property
def localBasePath(self):
return self._localBasePath
@property
def commonPathSuffix(self):
return self._commonPathSuffix
@property
def commonNetworkRelativeLink(self):
return self._commonNetworkRelativeLink
@property
def name(self):
return self._name
@property
def relativePath(self):
return self._relativePath
@property
def workingDir(self):
return self._workingDir
@property
def commandLineArgs(self):
return self._commandLineArgs
@property
def iconLocation(self):
return self._iconLocation
@property
def envTarget(self):
return self._envTarget
@property
def targetPath(self):
"""
Args:
woAnchor (bool): remove the anchor (\\server\path or drive:) from returned path.
Returns:
a libpath PureWindowsPath object for combined workingDir/relative path
or the envTarget
Raises:
ShortCutError when no target path found in Shortcut
"""
target = None
if self.workingDir:
target = PureWindowsPath(self.workingDir)
if self.relativePath:
target = target / PureWindowsPath(self.relativePath)
else: target = None
if not target and self.envTarget:
target = PureWindowsPath(self.envTarget)
if not target:
raise ShortCutError("Unable to retrieve target path from MS Shortcut: shortcut = {}"
.format(str(self.scPath)))
return target
@property
def targetPathWOAnchor(self):
tp = self.targetPath
return tp.relative_to(tp.anchor)
def _ParseLnkFile(self, lnkPath):
with lnkPath.open('rb') as f:
content = f.read()
# verify size (4 bytes)
hdrSize = struct.unpack('I', content[0x00:0x04])[0]
if hdrSize != 0x4C:
raise ShortCutError("MS Shortcut HeaderSize = {}, but required to be = {}: shortcut = {}"
.format(hdrSize, 0x4C, str(lnkPath)))
# verify LinkCLSID id (16 bytes)
self._clsid = bytes(struct.unpack('B'*16, content[0x04:0x14]))
if self._clsid != requiredCLSID:
raise ShortCutError("MS Shortcut ClassID = {}, but required to be = {}: shortcut = {}"
.format(self._clsid, requiredCLSID, str(lnkPath)))
# read the LinkFlags structure (4 bytes)
self._lnkFlags = struct.unpack('I', content[0x14:0x18])[0]
#logger.debug("lnkFlags=0x%0.8x" % self._lnkFlags)
position = 0x4C
# if HasLinkTargetIDList bit, then position to skip the stored IDList structure and header
if (self._lnkFlags & 0x01):
idListSize = struct.unpack('H', content[position:position+0x02])[0]
position += idListSize + 2
# if HasLinkInfo, then process the linkinfo structure
if (self._lnkFlags & 0x02):
(linkInfoSize, linkInfoHdrSize, self._linkInfoFlags,
volIdOffset, localBasePathOffset,
cmnNetRelativeLinkOffset, cmnPathSuffixOffset) = struct.unpack('IIIIIII', content[position:position+28])
# check for optional offsets
localBasePathOffsetUnicode = None
cmnPathSuffixOffsetUnicode = None
if linkInfoHdrSize >= 0x24:
(localBasePathOffsetUnicode, cmnPathSuffixOffsetUnicode) = struct.unpack('II', content[position+28:position+36])
#logger.debug("0x%0.8X" % linkInfoSize)
#logger.debug("0x%0.8X" % linkInfoHdrSize)
#logger.debug("0x%0.8X" % self._linkInfoFlags)
#logger.debug("0x%0.8X" % volIdOffset)
#logger.debug("0x%0.8X" % localBasePathOffset)
#logger.debug("0x%0.8X" % cmnNetRelativeLinkOffset)
#logger.debug("0x%0.8X" % cmnPathSuffixOffset)
#logger.debug("0x%0.8X" % localBasePathOffsetUnicode)
#logger.debug("0x%0.8X" % cmnPathSuffixOffsetUnicode)
# if info has a localBasePath
if (self._linkInfoFlags & 0x01):
bpPosition = position + localBasePathOffset
# not debugged - don't know if this works or not
self._localBasePath = UnpackZ('z', content[bpPosition:])[0].decode('ascii')
#logger.debug("localBasePath: {}".format(self._localBasePath))
if localBasePathOffsetUnicode:
bpPosition = position + localBasePathOffsetUnicode
self._localBasePath = UnpackUnicodeZ('z', content[bpPosition:])[0]
self._localBasePath = self._localBasePath.decode('utf-16')
#logger.debug("localBasePathUnicode: {}".format(self._localBasePath))
# get common Path Suffix
cmnSuffixPosition = position + cmnPathSuffixOffset
self._commonPathSuffix = UnpackZ('z', content[cmnSuffixPosition:])[0].decode('ascii')
#logger.debug("commonPathSuffix: {}".format(self._commonPathSuffix))
if cmnPathSuffixOffsetUnicode:
cmnSuffixPosition = position + cmnPathSuffixOffsetUnicode
self._commonPathSuffix = UnpackUnicodeZ('z', content[cmnSuffixPosition:])[0]
self._commonPathSuffix = self._commonPathSuffix.decode('utf-16')
#logger.debug("commonPathSuffix: {}".format(self._commonPathSuffix))
# check for CommonNetworkRelativeLink
if (self._linkInfoFlags & 0x02):
relPosition = position + cmnNetRelativeLinkOffset
self._commonNetworkRelativeLink = CommonNetworkRelativeLink(content, relPosition)
position += linkInfoSize
# If HasName
if (self._lnkFlags & 0x04):
(position, self._name) = self.readStringObj(content, position)
#logger.debug("name: {}".format(self._name))
# get relative path string
if (self._lnkFlags & 0x08):
(position, self._relativePath) = self.readStringObj(content, position)
#logger.debug("relPath='{}'".format(self._relativePath))
# get working dir string
if (self._lnkFlags & 0x10):
(position, self._workingDir) = self.readStringObj(content, position)
#logger.debug("workingDir='{}'".format(self._workingDir))
# get command line arguments
if (self._lnkFlags & 0x20):
(position, self._commandLineArgs) = self.readStringObj(content, position)
#logger.debug("commandLineArgs='{}'".format(self._commandLineArgs))
# get icon location
if (self._lnkFlags & 0x40):
(position, self._iconLocation) = self.readStringObj(content, position)
#logger.debug("iconLocation='{}'".format(self._iconLocation))
# look for environment properties
if (self._lnkFlags & 0x200):
while True:
size = struct.unpack('I', content[position:position+4])[0]
#logger.debug("blksize=%d" % size)
if size==0: break
signature = struct.unpack('I', content[position+4:position+8])[0]
#logger.debug("signature=0x%0.8x" % signature)
# EnvironmentVariableDataBlock
if signature == 0xA0000001:
if (self._lnkFlags & 0x80): # unicode
self._envTarget = UnpackUnicodeZ('z', content[position+268:])[0]
self._envTarget = self._envTarget.decode('utf-16')
else:
self._envTarget = UnpackZ('z', content[position+8:])[0].decode('ascii')
#logger.debug("envTarget='{}'".format(self._envTarget))
position += size
def readStringObj(self, scContent, position):
"""
returns:
tuple: (newPosition, string)
"""
strg = ''
size = struct.unpack('H', scContent[position:position+2])[0]
#logger.debug("workingDirSize={}".format(size))
if (self._lnkFlags & 0x80): # unicode
size *= 2
strg = struct.unpack(str(size)+'s', scContent[position+2:position+2+size])[0]
strg = strg.decode('utf-16')
else:
strg = struct.unpack(str(size)+'s', scContent[position+2:position+2+size])[0].decode('ascii')
#logger.debug("strg='{}'".format(strg))
position += size + 2 # 2 bytes to account for CountCharacters field
return (position, strg)
class CommonNetworkRelativeLink():
def __init__(self, scContent, linkContentPos):
self._networkProviderType = None
self._deviceName = None
self._netName = None
(linkSize, flags, netNameOffset,
devNameOffset, self._networkProviderType) = struct.unpack('IIIII', scContent[linkContentPos:linkContentPos+20])
#logger.debug("netnameOffset = {}".format(netNameOffset))
if netNameOffset > 0x014:
(netNameOffsetUnicode, devNameOffsetUnicode) = struct.unpack('II', scContent[linkContentPos+20:linkContentPos+28])
#logger.debug("netnameOffsetUnicode = {}".format(netNameOffsetUnicode))
self._netName = UnpackUnicodeZ('z', scContent[linkContentPos+netNameOffsetUnicode:])[0]
self._netName = self._netName.decode('utf-16')
self._deviceName = UnpackUnicodeZ('z', scContent[linkContentPos+devNameOffsetUnicode:])[0]
self._deviceName = self._deviceName.decode('utf-16')
else:
self._netName = UnpackZ('z', scContent[linkContentPos+netNameOffset:])[0].decode('ascii')
self._deviceName = UnpackZ('z', scContent[linkContentPos+devNameOffset:])[0].decode('ascii')
@property
def deviceName(self):
return self._deviceName
@property
def netName(self):
return self._netName
@property
def networkProviderType(self):
return self._networkProviderType
def UnpackZ (fmt, buf) :
"""
Unpack Null Terminated String
"""
#logger.debug(bytes(buf))
while True :
pos = fmt.find ('z')
if pos < 0 :
break
z_start = struct.calcsize (fmt[:pos])
z_len = buf[z_start:].find(b'\0')
#logger.debug(z_len)
fmt = '%s%dsx%s' % (fmt[:pos], z_len, fmt[pos+1:])
#logger.debug("fmt='{}', len={}".format(fmt, z_len))
fmtlen = struct.calcsize(fmt)
return struct.unpack (fmt, buf[0:fmtlen])
def UnpackUnicodeZ (fmt, buf) :
"""
Unpack Null Terminated String
"""
#logger.debug(bytes(buf))
while True :
pos = fmt.find ('z')
if pos < 0 :
break
z_start = struct.calcsize (fmt[:pos])
# look for null bytes by pairs
z_len = 0
for i in range(z_start,len(buf),2):
if buf[i:i+2] == b'\0\0':
z_len = i-z_start
break
fmt = '%s%dsxx%s' % (fmt[:pos], z_len, fmt[pos+1:])
# logger.debug("fmt='{}', len={}".format(fmt, z_len))
fmtlen = struct.calcsize(fmt)
return struct.unpack (fmt, buf[0:fmtlen])
I hope this helps others as well.
Thanks
I didn't really like any of the answers available because I didn't want to keep importing more and more libraries and the 'shell' option was spotty on my test machines. I opted for reading the ".lnk" in and then using a regular expression to read out the path. For my purposes, I am looking for pdf files that were recently opened and then reading the content of those files:
import re
# Example file path to the shortcut
shortcut = "shortcutFileName.lnk"
# Open the lnk file using the ISO-8859-1 encoder to avoid errors for special characters
lnkFile = open(shortcut, 'r', encoding = "ISO-8859-1")
# Use a regular expression to parse out the pdf file on C:\
filePath = re.findall("C:.*?pdf", lnkFile.read(), flags=re.DOTALL)
# Close File
lnkFile.close()
# Read the pdf at the lnk Target
pdfFile = open(filePath[0], 'rb')
Comments:
Obviously this works for PDF files, but you need to specify other file extensions accordingly.
It's as easy as opening an ".exe" file. Here, too, we are going to use the os module. Create a shortcut (.lnk) to your desired application and store it in any folder of your choice. Then, in any Python file, first import the os module (already installed, just import it). Next, use a variable, say path, and assign it a string containing the location of your .lnk file. Finally, use os.startfile() to open the shortcut.
Points to remember:
The location should be within double quotes.
Most importantly, open Properties and, under it, open "Details". There you can get the exact name of your shortcut; write that name with ".lnk" appended at the end.
Now you have completed the procedure. I hope it helps you. For additional assistance, I am leaving my code for this at the bottom.
import os
path = "C:\\Users\\hello\\OneDrive\\Desktop\\Shortcuts\\OneNote for Windows 10.lnk"
os.startfile(path)
In my code, I used path as the variable and created a shortcut for OneNote. path holds the location of OneNote's shortcut, so when I use os.startfile(path), the os module opens the shortcut file stored in path.
This job is possible without any modules: open the file in read-binary mode ('rb') and read it, which returns a bytes string containing the destination of the shortcut file. This is the code to accomplish the task:
with open('programs.lnk - Copy','rb') as f:
destination=f.read()
I am currently using Python 3.9.2; in case you face problems with this, just tell me and I will try to fix it.
A more stable solution in Python, using PowerShell to read the target path from the .lnk file.
Using only standard libraries avoids introducing extra dependencies such as win32com
This approach works with the .lnks that failed with Jared's answer (more details)
We avoid directly reading the file, which felt hacky and sometimes failed
import subprocess
def get_target(link_path) -> (str, str):
"""
Get the target & args of a Windows shortcut (.lnk)
:param link_path: The Path or string-path to the shortcut, e.g. "C:\\Users\\Public\\Desktop\\My Shortcut.lnk"
:return: A tuple of the target and arguments, e.g. ("C:\\Program Files\\My Program.exe", "--my-arg")
"""
# get_target implementation by hannes, https://gist.github.com/Winand/997ed38269e899eb561991a0c663fa49
ps_command = \
"$WSShell = New-Object -ComObject Wscript.Shell;" \
"$Shortcut = $WSShell.CreateShortcut(\"" + str(link_path) + "\"); " \
"Write-Host $Shortcut.TargetPath ';' $shortcut.Arguments "
output = subprocess.run(["powershell.exe", ps_command], capture_output=True)
raw = output.stdout.decode('utf-8')
launch_path, args = [x.strip() for x in raw.split(';', 1)]
return launch_path, args
# to test
shortcut_file = r"C:\Users\REPLACE_WITH_USERNAME\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Accessibility\Narrator.lnk"
a, args = get_target(shortcut_file)
print(a) # C:\WINDOWS\system32\narrator.exe
(You can remove the -> type hint to get it to work in older Python versions.)
I did notice this is slow when running on lots of shortcuts. You could use Jared's method, check if the result is None, and if so, run this code to get the target path.
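A hedged sketch of that fallback pattern (parse_lnk_target is a hypothetical wrapper around the binary-parsing approach from the earlier answer and is assumed to return None on failure; get_target is the PowerShell function above):
def resolve_target(lnk_path):
    # fast path first, PowerShell only when the binary parse fails
    target = parse_lnk_target(lnk_path)
    if not target:
        target, _args = get_target(lnk_path)
    return target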
The nice approach with direct regex-based parsing (proposed in another answer) didn't work reliably for all shortcuts in my case. Some of them have only a relative path like ..\\..\\..\\..\\..\\..\\Program Files\\ImageGlass\\ImageGlass.exe (produced by an MSI installer), and it is stored with wide chars, which are tricky to handle in Python.
So I've discovered a Python module LnkParse3, which is easy to use and meets my needs.
Here is a sample script to show target of a lnk-file passed as first argument:
import LnkParse3
import sys
with open(sys.argv[1], 'rb') as indata:
lnk = LnkParse3.lnk_file(indata)
print(lnk.lnk_command)
I arrived at this thread looking for a way to parse a ".lnk" file and get the target file name.
I found another very simple solution:
pip install comtypes
Then
from comtypes.client import CreateObject
from comtypes.persist import IPersistFile
from comtypes.shelllink import ShellLink
# MAKE SURE THIS VAR CONTAINS A STRING AND NOT A 'Path' OBJECT
# I spent too much time figuring out the problem with the .load(..) function ahead
pathStr = r"c:\folder\yourlink.lnk"
s = CreateObject(ShellLink)
p = s.QueryInterface(IPersistFile)
p.Load(pathStr, False)
print(s.GetPath())
print(s.GetArguments())
print(s.GetWorkingDirectory())
print(s.GetIconLocation())
try:
# the GetDescription create exception in some of the links
print(s.GetDescription())
except Exception as e:
print(e)
print(s.Hotkey)
print(s.ShowCmd)
Based on this great answer...
https://stackoverflow.com/a/43856809/2992810