How to read in a .WPF file in Python?

I am trying to read in an old text document using Python.
The file was written in 1995 and has the extension ".WPF".
I tried:
f = open('/Users/zachary/Downloads/2R.WPF', mode = 'r')
print(f.read())
If I open it in LibreOffice, it displays fine.
Any hints on how to process the text of a .WPF file using Python?
Link:
WTO Dispute Settlement DS2 Panel Report
Someone marked this as a duplicate on the assumption that the file is just a misnamed .doc, but it does not appear to be one: textract.process returns an error saying it's not a .doc file.

As can be determined from the very first bytes, that file is a WordPerfect 5.x file (where x is 0, 1, or possibly 2), a file format dating back to around 1989.
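You can check this yourself by peeking at the signature; a minimal sketch, using the path from the question (a WP 5.x file starts with byte 0xFF followed by the ASCII string "WPC", as the parser below also verifies):
import sys

with open('/Users/zachary/Downloads/2R.WPF', 'rb') as f:
    magic = f.read(4)

# prints True for a WordPerfect 5.x document
print(magic == b'\xffWPC')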
According to its description, the Tika interface for Python should be able to convert this for you, but as far as word processor formats go, these older WordPerfect files are fairly easy to decode, without anything more than a plain Python installation.
The format consists of a large header (which, among other information, defines the printer that the document was formatted for, the list of fonts used, and some basic "style" information – I chose to skip it entirely in my program below), followed by plain text which is interspersed with binary codes which govern the formatting.
The binary codes appear in 3 distinct forms:
single-byte: 0x0A is a Return, 0xA9 is a breaking hyphen, 0xAA is a breaking hyphen when the line is broken at that position, and so on.
multi-byte, fixed length: the byte is followed by one or more specifications. For example, 0xC0 is a "special character code". It is followed by the character set index and the index of the actual character inside that set. The final byte of a fixed-length code is always the starting byte again.
multi-byte, variable length: the code determines a main category of formatting and is followed by a second to indicate a subcategory; after that, 2 bytes in little-endian indicate the length of the following data (excluding the first 4 bytes). This code always ends with the same items in reversed order: 2 bytes (little-endian) for the length, the subcategory, then the main category.
Codes between 0x00..0x1F and 0x7F..0xBF are single-byte control codes (not all are used). Codes between 0xC0..0xCF are fixed-length control codes, with various predefined lengths. Codes from 0xD0 onward are always variable-length codes.
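As a quick sketch, classifying a lead byte comes down to a few range checks (the full program below applies these same ranges):
def code_class(byte):
    # bytes 32..126 are plain ASCII text; everything else is a control code
    if byte <= 0x1F or 0x7F <= byte <= 0xBF:
        return 'single-byte'
    if 0xC0 <= byte <= 0xCF:
        return 'fixed-length'
    if byte >= 0xD0:
        return 'variable-length'
    return 'text'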
With only this information, it's already possible to extract the plain text of this document, and just skip all possible formatting. Comparing the output codes against the PDF from the same site reveals the meaning of some of the codes, such as the various types of Return, the Tab, and plain text formatting such as bold and italics.
In addition, footnotes are stored in-line inside a variable-length code, so this needs some form of re-entrant parser.
The following Python 3 program is a basic framework which you can use as-is (it extracts the text, with a hint for the footnotes), or you can enable the commented-out lines at the bottom and find further information on parsing more of the formatting code.
# -*- coding: utf-8 -*-
import sys

WPType_None = 0
WPType_Text = 1
WPType_Byte = 2
WPType_Fixed = 3
WPType_Variable = 4

plain_remap = {10:'\n', 11:' ', 13:' ', 160:' ', 169:'-', 170:'-'}
WpCharacterSet = { 0x0121:'à', 0x0406:'§', 0x041c:u'’', 0x041d:u'‘', 0x041f:'”', 0x0420:'“', 0x0422:'—' }

textAttributes = [
    "Extra Large",
    "Very Large",
    "Large",
    "Small",
    "Fine",
    "Superscript",
    "Subscript",
    "Outline",
    "Italic",
    "Shadow",
    "Redline",
    "Double Underline",
    "Bold",
    "Strikeout",
    "Underline",
    "SmallCaps" ]

class WPElem:
    def __init__(self, type=WPType_None, data=[], code=None):
        self.type = type
        self.code = code
        self.data = data

class WordPerfect:
    def __init__(self, filename):
        with open(filename, "rb") as file:
            self.data = bytearray(file.read())
        sig = ''.join(chr(x) for x in self.data[1:4])
        if self.data[0] != 255 or sig != 'WPC':
            raise TypeError('Invalid file type')
        self.data_start = self.data[4]+256*(self.data[5]+256*(self.data[6]+256*self.data[7]))
        self.length = len(self.data)
        self.elements = []
        self.parse(self.data_start, self.length)

    def parse(self, start, maxlength):
        pos = start
        while pos < maxlength:
            byte = self.data[pos]
            if byte in plain_remap:
                byte = ord(plain_remap[byte])
            if byte == 10 or byte >= 32 and byte <= 126:
                if len(self.elements) == 0 or self.elements[-1].type != WPType_Text:
                    self.elements.append(WPElem(WPType_Text, ''))
                self.elements[-1].data += chr(byte)
                pos += 1
            elif byte == 12:
                self.elements.append(WPElem(WPType_Text, '\n\n'))
                pos += 1
            elif byte == 0x8c:  # [HRt/Pg Break]
                self.elements.append(WPElem(WPType_Text, '\n'))
                pos += 1
            elif byte == 0x8d:  # [Ftn Num]
                self.elements.append(WPElem(WPType_Text, '[Ftn Num]'))
                pos += 1
            elif byte == 0x99:  # [HRt/Top of Pg]
                self.elements.append(WPElem(WPType_Text, '\n'))
                pos += 1
            elif byte == 0xc0 and pos+3 < maxlength and self.data[pos+3] == 0xc0:
                wpchar = self.data[pos+1]+256*self.data[pos+2]
                if wpchar in WpCharacterSet:
                    self.elements.append(WPElem(WPType_Text, WpCharacterSet[wpchar]))
                else:
                    self.elements.append(WPElem(WPType_Text, '{CHAR:%04X}' % wpchar))
                pos += 4
            elif byte == 0xc1 and self.data[pos+8] == 0xc1:
                # self.elements.append(WPElem(WPType_Fixed, self.data[pos:pos+7]))
                self.elements.append(WPElem(WPType_Text, '\t'))
                pos += 9
            elif byte == 0xc2 and self.data[pos+10] == 0xc2:
                # self.elements.append(WPElem(WPType_Fixed, self.data[pos:pos+9]))
                self.elements.append(WPElem(WPType_Text, '\t'))
                pos += 11
            elif byte == 0xc3:
                self.elements.append(WPElem(WPType_Fixed, self.data[pos:pos+1], '%s On' % textAttributes[self.data[pos+1]]))
                pos += 3
            elif byte == 0xc4:
                self.elements.append(WPElem(WPType_Fixed, self.data[pos:pos+1], '%s Off' % textAttributes[self.data[pos+1]]))
                pos += 3
            elif byte == 0xc6:
                self.elements.append(WPElem(WPType_Fixed, self.data[pos:pos+5]))
                pos += 6
            elif byte == 0xd6 and self.data[pos+1] == 0:  # Footnote
                self.elements.append(WPElem(WPType_Text, '[Footnote:'))
                length = self.data[pos+2]+256*self.data[pos+3]
                self.parse(pos+0x13, pos+length)
                pos += 4+length
                self.elements.append(WPElem(WPType_Text, ']'))
            else:
                self.elements.append(WPElem(WPType_Byte, [byte]))
                if byte >= 0xd0 and pos+4 <= maxlength:
                    length = self.data[pos+2]+256*self.data[pos+3]
                    if pos+4+length <= self.length and self.data[pos+4+length-1] == byte:
                        self.elements[-1].type = WPType_Variable
                        self.elements[-1].data += [x for x in self.data[pos+1:pos+length]]
                        pos += 4+length
                    else:
                        pos += 1
                else:
                    pos += 1

if len(sys.argv) != 2:
    print("usage: read_wpf.py [suitably ancient WordPerfect file]")
    sys.exit(1)

wpdata = WordPerfect(sys.argv[1])

for i in wpdata.elements:
    if i.type == WPType_Text:
        print(i.data, end='')
    '''
    elif i.code:
        print('[%s]' % i.code, end='')
    elif i.type == WPType_Variable:
        print('[%02X:%d]' % (i.data[0], i.data[1]), end='')
    else:
        print('[%02X]' % i.data[0], end='')
    '''
Running it prints out the text to the console:
$ python3 read_wpf.py 2R.WPF
RESTRICTED
World Trade WT/DS2/R
29 January 1996
Organization
(96-0326)
(.. several thousands of lines omitted for brevity ..)
8.2 The Panel recommends that the Dispute Settlement Body request the
United States to bring this part of the Gasoline Rule into conformity
with its obligations under the General Agreement.
You can either rewrite the program to write the text to a plain text file, or redirect the console output into a file.
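For example, using shell redirection (2R.txt is an arbitrary output name):
$ python3 read_wpf.py 2R.WPF > 2R.txt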
I've only added a translation for the handful of special characters that appear in the sample file. For a full-featured version, you'd need to look up a 90s data sheet somewhere, and provide Unicode translations for each of the thousands of characters.
Similarly, I've only 'parsed' some of the special formatting codes, and to a very limited extent. If you need to extract particular formatting – say, tab settings, margins, font sizes, et cetera – you must locate a full specification of the file format and add specific parsing code for those functions.


Python 3.6 script surprisingly slow on Windows 10 but not on Ubuntu 17.10

I recently had to write a coding challenge for a company: merge 3 CSV files into one based on the first attribute of each (the attributes repeat across all files).
I wrote the code and sent it to them, but they said it took 2 minutes to run. That was funny, because it ran in 10 seconds on my machine. My machine had the same processor, 16 GB of RAM, and an SSD as well. Very similar environments.
I tried optimising it and resubmitted it. This time they ran it on an Ubuntu machine and got 11 seconds, while the code still took 100 seconds on Windows 10.
Another peculiar thing is that when I tried profiling it with the profile module, it went on forever; I had to terminate it after 450 seconds. I switched to cProfile, which profiled the run in 7 seconds.
EDIT: The exact formulation of the problem is
Write a console program to merge the files provided in a timely and
efficient manner. File paths should be supplied as arguments so that
the program can be evaluated on different data sets. The merged file
should be saved as CSV; use the id column as the unique key for
merging; the program should do any necessary data cleaning and error
checking.
Feel free to use any language you’re comfortable with – only
restriction is no external libraries as this defeats the purpose of
the test. If the language provides CSV parsing libraries (like
Python), please avoid using them as well as this is a part of the
test.
Without further ado here's the code:
#!/usr/bin/python3
import sys
from multiprocessing import Pool

HEADERS = ['id']

def csv_tuple_quotes_valid(a_tuple):
    """
    checks if the quotes in each attribute of an entry (i.e. a tuple) agree with the csv format
    returns True or False
    """
    for attribute in a_tuple:
        in_quotes = False
        attr_len = len(attribute)
        skip_next = False
        for i in range(0, attr_len):
            if not skip_next and attribute[i] == '\"':
                if i < attr_len - 1 and attribute[i + 1] == '\"':
                    skip_next = True
                    continue
                elif i == 0 or i == attr_len - 1:
                    in_quotes = not in_quotes
                else:
                    return False
            else:
                skip_next = False
        if in_quotes:
            return False
    return True

def check_and_parse_potential_tuple(to_parse):
    """
    receives a string and returns an array of the attributes of the csv line
    if the string was not a valid csv line, then returns False
    """
    a_tuple = []
    attribute_start_index = 0
    to_parse_len = len(to_parse)
    in_quotes = False
    i = 0
    # iterate through the string (line from the csv)
    while i < to_parse_len:
        current_char = to_parse[i]
        # this works the following way: if we meet a quote ("), it must be in one
        # of five cases: "" | ", | ," | "\0 | (start_of_string)"
        # in case we are inside a quoted attribute (i.e. "123"), then commas are ignored
        # the following code also extracts the tuples' attributes
        if current_char == '\"':
            if i == 0 or (to_parse[i - 1] == ',' and not in_quotes):  # (start_of_string)" and ," case
                # not including the quote in the next attr
                attribute_start_index = i + 1
                # starting a quoted attr
                in_quotes = True
            elif i + 1 < to_parse_len:
                if to_parse[i + 1] == '\"':  # "" case
                    i += 1  # skip the next " because it is part of a ""
                elif to_parse[i + 1] == ',' and in_quotes:  # ", case
                    a_tuple.append(to_parse[attribute_start_index:i].strip())
                    # not including the quote and comma in the next attr
                    attribute_start_index = i + 2
                    in_quotes = False  # the quoted attr has ended
                    # skip the next comma - we know what it is for
                    i += 1
                else:
                    # since we cannot have a random " in the middle of an attr
                    return False
            elif i == to_parse_len - 1:  # "\0 case
                a_tuple.append(to_parse[attribute_start_index:i].strip())
                # reached end of line, so no more attr's to extract
                attribute_start_index = to_parse_len
                in_quotes = False
            else:
                return False
        elif current_char == ',':
            if not in_quotes:
                a_tuple.append(to_parse[attribute_start_index:i].strip())
                attribute_start_index = i + 1
        i += 1
    # in case the last attr was left empty or unquoted
    if attribute_start_index < to_parse_len or (not in_quotes and to_parse[-1] == ','):
        a_tuple.append(to_parse[attribute_start_index:])
    # line ended while parsing; i.e. a quote was opened but not closed
    if in_quotes:
        return False
    return a_tuple

def parse_tuple(to_parse, no_of_headers):
    """
    parses a string and returns an array with no_of_headers number of headers
    raises an error if the string was not a valid CSV line
    """
    # get rid of the newline at the end of every line
    to_parse = to_parse.strip()
    # return to_parse.split(',')  # if we assume the data is in a valid format
    # the format checking below increases the execution time by a factor of 2;
    # if the data is known to be valid, uncomment the return statement above
    # if there are more commas than fields, then we must take into consideration
    # how the quotes parse and then extract the attributes
    if to_parse.count(',') + 1 > no_of_headers:
        result = check_and_parse_potential_tuple(to_parse)
        if result:
            a_tuple = result
        else:
            raise TypeError('Error while parsing CSV line %s. The quotes do not parse' % to_parse)
    else:
        a_tuple = to_parse.split(',')
        if not csv_tuple_quotes_valid(a_tuple):
            raise TypeError('Error while parsing CSV line %s. The quotes do not parse' % to_parse)
    # if the format is correct but more data fields were provided
    # the following works faster than an if statement that checks the length of a_tuple
    try:
        a_tuple[no_of_headers - 1]
    except IndexError:
        raise TypeError('Error while parsing CSV line %s. Unknown reason' % to_parse)
    # interning replaces the use of my own hashtables to store the duplicated attribute values
    for i in range(1, no_of_headers):
        a_tuple[i] = sys.intern(a_tuple[i])
    return a_tuple

def read_file(path, file_number):
    """
    reads the csv file and returns (dict, int)
    the dict is the mapping of id's to attributes
    the integer is the number of attributes (headers) for the csv file
    """
    global HEADERS
    try:
        file = open(path, 'r')
    except FileNotFoundError as e:
        print("error in %s:\n%s\nexiting..." % (path, e))
        exit(1)
    main_table = {}
    headers = file.readline().strip().split(',')
    no_of_headers = len(headers)
    HEADERS.extend(headers[1:])  # keep the headers from the file
    lines = file.readlines()
    file.close()
    args = []
    for line in lines:
        args.append((line, no_of_headers))
    # pool is a pool of worker processes parsing the lines in parallel
    with Pool() as workers:
        try:
            all_tuples = workers.starmap(parse_tuple, args, 1000)
        except TypeError as e:
            print('Error in file %s:\n%s\nexiting thread...' % (path, e.args))
            exit(1)
    for a_tuple in all_tuples:
        # add quotes to key if needed
        key = a_tuple[0] if a_tuple[0][0] == '\"' else ('\"%s\"' % a_tuple[0])
        main_table[key] = a_tuple[1:]
    return (main_table, no_of_headers)

def merge_files():
    """
    produces a file called merged.csv
    """
    global HEADERS
    no_of_files = len(sys.argv) - 1
    processed_files = [None] * no_of_files
    for i in range(0, no_of_files):
        processed_files[i] = read_file(sys.argv[i + 1], i)
    out_file = open('merged.csv', 'w+')
    merged_str = ','.join(HEADERS)
    all_keys = {}
    # this is to ensure that we include all keys in the final file,
    # even those that are missing from some files and present in others
    for processed_file in processed_files:
        all_keys.update(processed_file[0])
    for key in all_keys:
        merged_str += '\n%s' % key
        for i in range(0, no_of_files):
            (main_table, no_of_headers) = processed_files[i]
            try:
                for attr in main_table[key]:
                    merged_str += ',%s' % attr
            except KeyError:
                print('NOTE: no values found for id %s in file \"%s\"' % (key, sys.argv[i + 1]))
                merged_str += ',' * (no_of_headers - 1)
    out_file.write(merged_str)
    out_file.close()

if __name__ == '__main__':
    # merge_files()
    import cProfile
    cProfile.run('merge_files()')
    # import time
    # start = time.time()
    # print(time.time() - start)
Here is the profiler report I got on my Windows machine.
EDIT: The rest of the csv data provided is here. Pastebin was taking too long to process the files, so...
It might not be the best code and I know that, but my question is: what slows Windows down so much that doesn't slow Ubuntu down? The merge_files() function takes the longest, with 94 seconds just for itself, not including the calls to other functions. And there doesn't seem to be anything obvious to me about why it is so slow.
Thanks
EDIT: Note: we both used the same dataset to run the code.
It turns out that Windows and Linux handle very long strings differently. When I moved the out_file.write(merged_str) inside the outer for loop (for key in all_keys:) and stopped appending to merged_str, it ran in 11 seconds as expected. I don't have enough knowledge of either OS's memory management to predict why it is so different, but the likely culprit is the repeated string concatenation: CPython can only grow a string in place when the allocator cooperates, and reallocating a string that grows to tens of megabytes appears to be far cheaper on Linux than on Windows.
I would say that the second way (writing inside the loop) is the more fail-safe method anyway, because it is unreasonable to keep a 30 MB string in memory.
Funnily enough, I initially ran both writing strategies a few times on my Linux machine, and the one with the large string seemed to go faster, so I stuck with it. I guess you never know.
Here's the modified code
for key in all_keys:
    merged_str = '%s' % key
    for i in range(0, no_of_files):
        (main_table, no_of_headers) = processed_files[i]
        try:
            for attr in main_table[key]:
                merged_str += ',%s' % attr
        except KeyError:
            print('NOTE: no values found for id %s in file \"%s\"' % (key, sys.argv[i + 1]))
            merged_str += ',' * (no_of_headers - 1)
    out_file.write(merged_str + '\n')
out_file.close()
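A sketch of a third option, which sidesteps repeated string concatenation altogether by collecting each row's fields in a list and joining once (same variables as in the code above):
for key in all_keys:
    row = [key]
    for i in range(0, no_of_files):
        (main_table, no_of_headers) = processed_files[i]
        try:
            row.extend(main_table[key])
        except KeyError:
            print('NOTE: no values found for id %s in file \"%s\"' % (key, sys.argv[i + 1]))
            row.extend([''] * (no_of_headers - 1))
    # one write per row; no quadratic string growth on either OS
    out_file.write(','.join(row) + '\n')
out_file.close()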
When I run your solution on Ubuntu 16.04 with the three given files, it seems to take ~8 seconds to complete. The only modification I made was to uncomment the timing code at the bottom and use it.
$ python3 dimitar_merge.py file1.csv file2.csv file3.csv
NOTE: no values found for id "aaa5d09b-684b-47d6-8829-3dbefd608b5e" in file "file2.csv"
NOTE: no values found for id "38f79a49-4357-4d5a-90a5-18052ef03882" in file "file2.csv"
NOTE: no values found for id "766590d9-4f5b-4745-885b-83894553394b" in file "file2.csv"
8.039648056030273
$ python3 dimitar_merge.py file1.csv file2.csv file3.csv
NOTE: no values found for id "38f79a49-4357-4d5a-90a5-18052ef03882" in file "file2.csv"
NOTE: no values found for id "766590d9-4f5b-4745-885b-83894553394b" in file "file2.csv"
NOTE: no values found for id "aaa5d09b-684b-47d6-8829-3dbefd608b5e" in file "file2.csv"
7.78482985496521
I rewrote my first attempt without using csv from the standard library and am now getting times of ~4.3 seconds.
$ python3 lettuce_merge.py file1.csv file2.csv file3.csv
4.332579612731934
$ python3 lettuce_merge.py file1.csv file2.csv file3.csv
4.305467367172241
$ python3 lettuce_merge.py file1.csv file2.csv file3.csv
4.27345871925354
This is my solution code (lettuce_merge.py):
from collections import defaultdict

def split_row(csv_row):
    return [col.strip('"') for col in csv_row.rstrip().split(',')]

def merge_csv_files(files):
    file_headers = []
    merged_headers = []
    for i, file in enumerate(files):
        current_header = split_row(next(file))
        unique_key, *current_header = current_header
        if i == 0:
            merged_headers.append(unique_key)
        merged_headers.extend(current_header)
        file_headers.append(current_header)
    result = defaultdict(lambda: [''] * (len(merged_headers) - 1))
    for file_header, file in zip(file_headers, files):
        for line in file:
            key, *values = split_row(line)
            for col_name, col_value in zip(file_header, values):
                result[key][merged_headers.index(col_name) - 1] = col_value
        file.close()
    quotes = '"{}"'.format
    with open('lettuce_merged.csv', 'w') as f:
        f.write(','.join(quotes(a) for a in merged_headers) + '\n')
        for key, values in result.items():
            f.write(','.join(quotes(b) for b in [key] + values) + '\n')

if __name__ == '__main__':
    from argparse import ArgumentParser, FileType
    from time import time

    parser = ArgumentParser()
    parser.add_argument('files', nargs='*', type=FileType('r'))
    args = parser.parse_args()

    start_time = time()
    merge_csv_files(args.files)
    print(time() - start_time)
I'm sure this code could be optimized even further but sometimes just seeing another way to solve a problem can help spark new ideas.

Python - encryption and decryption scripts produce occasional errors

I made a Python script to encrypt plaintext files using the symmetric-key algorithm described in this video. I then created a second script to decrypt the encrypted message. Here is the original text:
I came, I saw, I conquered.
Here is the text after being encrypted and decrypted:
I came, I saw, I conquerdd.
Almost perfect, except for a single letter. In longer texts there will be multiple letters that are just off, i.e. the numerical representation of the character that appears is one lower than the numerical representation of the original character. I have no idea why this is.
Here's how my scripts work. First, I generated a random sequence of digits -- my PAD -- and saved it in the text file "pad.txt". I won't show the code because it is so straightforward. I then saved the text which I want to be encrypted in "text.txt". Next, I run the encryption script, which encrypts the text and saves it in the file "encryptedText.txt":
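(For reference, a pad generator along these lines would do. The value range is my assumption, based on how the pad is used below: each shift must stay below the 98-character alphabet size, because the wrap-around is only applied once.)
import random

# one random shift per position; 10000 is an arbitrary pad length
pad = [str(random.randint(0, 97)) for _ in range(10000)]
open("pad.txt", "w").write(" ".join(pad))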
#!/usr/bin/python3.4
import string

def getPad():
    padString = open("pad.txt", "r").read()
    pad = padString.split(" ")
    return pad

def encrypt(textToEncrypt, pad):
    encryptedText = ""
    # the last two elements of string.printable are not used
    # because they don't show up well in text files
    possibleChars = string.printable[:98]
    for i in range(len(textToEncrypt)):
        char = textToEncrypt[i]
        if char in possibleChars:
            num = possibleChars.index(char)
        else:
            return False
        encryptedNum = num + int(pad[i % len(pad)])
        if encryptedNum >= len(possibleChars):
            encryptedNum = encryptedNum - len(possibleChars)
        encryptedChar = possibleChars[encryptedNum]
        encryptedText = encryptedText + encryptedChar
    return encryptedText

if __name__ == "__main__":
    textToEncrypt = open("text.txt", "r").read()
    pad = getPad()
    encryptedText = encrypt(textToEncrypt, pad)
    if not encryptedText:
        print("""An error occurred during the encryption process. Confirm that \
there are no forbidden symbols in your text.""")
    else:
        open("encryptedText.txt", "w").write(encryptedText)
Finally, I decrypt the text with this script:
#!/usr/bin/python3.4
import string

def getPad():
    padString = open("pad.txt", "r").read()
    pad = padString.split(" ")
    return pad

def decrypt(textToDecrypt, pad):
    trueText = ""
    possibleChars = string.printable[:98]
    for i in range(len(textToDecrypt)):
        encryptedChar = textToDecrypt[i]
        encryptedNum = possibleChars.index(encryptedChar)
        trueNum = encryptedNum - int(pad[i % len(pad)])
        if trueNum < 0:
            trueNum = trueNum + len(possibleChars)
        trueChar = possibleChars[trueNum]
        trueText = trueText + trueChar
    return trueText

if __name__ == "__main__":
    pad = getPad()
    textToDecrypt = open("encryptedText.txt", "r").read()
    trueText = decrypt(textToDecrypt, pad)
    open("decryptedText.txt", "w").write(trueText)
Both scripts seem very straightforward, and they obviously work almost perfectly. However, every once in a while there is an error, and I cannot see why.
I found the solution to this problem. It turns out that every character that was not decrypted properly was encrypted to \r, which my text editor changed to a \n for whatever reason. Removing \r from the list of possible characters fixed the issue.
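In code, the fix is a one-character change (assuming the standard CPython ordering of string.printable, which ends with ' \t\n\r\x0b\x0c', so that index 97 is '\r'):
import string

# keep space, tab and newline, but drop '\r' (and the vertical tab/form feed)
possibleChars = string.printable[:97]
assert '\r' not in possibleChars and '\n' in possibleChars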

Python reading until null character from Telnet

I am telnetting to my server, which answers with messages that have a hex 00 (null character) appended to the end, and that character cannot be read. I have searched through and through, but can't seem to make it work. A simple example:
from telnetlib import Telnet
connection = Telnet('localhost', 5001)
connection.write('aa\n')
connection.read_eager()
This returns an output:
'Fail - Command aa not found.\n\r'
whereas it should be something like:
'Fail - Command aa not found.\n\r\0'
Is there any way to get this end-of-string character? Can I get bytes as output if the character is dropped on purpose?
The 00 character is there; telnetlib is simply discarding it.
I stumbled on this same problem when trying to get data from an RS232-to-TCP/IP converter using telnet: telnetlib would suppress every 0x00 in the message. As Fredrik Johansson correctly answered, that is how telnetlib was implemented.
One solution is to override the process_rawq() function of telnetlib's Telnet class with a version that doesn't eat all the null characters:
import telnetlib
from telnetlib import IAC, DO, DONT, WILL, WONT, SB, SE, NOOPT

def _process_rawq(self):
    """Modified implementation of this function, needed because telnetlib
    suppresses 0x00 and \021 in the data it reads.
    """
    buf = ['', '']
    try:
        while self.rawq:
            c = self.rawq_getchar()
            if not self.iacseq:
                # if c == theNULL:
                #     continue
                # if c == "\021":
                #     continue
                if c != IAC:
                    buf[self.sb] = buf[self.sb] + c
                    continue
                else:
                    self.iacseq += c
            elif len(self.iacseq) == 1:
                # 'IAC: IAC CMD [OPTION only for WILL/WONT/DO/DONT]'
                if c in (DO, DONT, WILL, WONT):
                    self.iacseq += c
                    continue
                self.iacseq = ''
                if c == IAC:
                    buf[self.sb] = buf[self.sb] + c
                else:
                    if c == SB:  # SB ... SE start.
                        self.sb = 1
                        self.sbdataq = ''
                    elif c == SE:
                        self.sb = 0
                        self.sbdataq = self.sbdataq + buf[1]
                        buf[1] = ''
                    if self.option_callback:
                        # Callback is supposed to look into
                        # the sbdataq
                        self.option_callback(self.sock, c, NOOPT)
                    else:
                        # We can't offer automatic processing of
                        # suboptions. Alas, we should not get any
                        # unless we did a WILL/DO before.
                        self.msg('IAC %d not recognized' % ord(c))
            elif len(self.iacseq) == 2:
                cmd = self.iacseq[1]
                self.iacseq = ''
                opt = c
                if cmd in (DO, DONT):
                    self.msg('IAC %s %d',
                             cmd == DO and 'DO' or 'DONT', ord(opt))
                    if self.option_callback:
                        self.option_callback(self.sock, cmd, opt)
                    else:
                        self.sock.sendall(IAC + WONT + opt)
                elif cmd in (WILL, WONT):
                    self.msg('IAC %s %d',
                             cmd == WILL and 'WILL' or 'WONT', ord(opt))
                    if self.option_callback:
                        self.option_callback(self.sock, cmd, opt)
                    else:
                        self.sock.sendall(IAC + DONT + opt)
    except EOFError:  # raised by self.rawq_getchar()
        self.iacseq = ''  # Reset on EOF
        self.sb = 0
    self.cookedq = self.cookedq + buf[0]
    self.sbdataq = self.sbdataq + buf[1]
then override the Telnet class' method:
telnetlib.Telnet.process_rawq = _process_rawq
This solved the problem for me.
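As a lighter-weight alternative (my own suggestion, not from the original answer): if you don't need telnetlib's option handling at all, you can read raw bytes from the underlying socket, which preserves the NULs:
from telnetlib import Telnet

connection = Telnet('localhost', 5001)
connection.write(b'aa\n')

# bypass telnetlib's input processing and read the raw bytes;
# note these may also include any IAC negotiation bytes the server sends
raw = connection.get_socket().recv(4096)
print(raw)  # e.g. b'Fail - Command aa not found.\n\r\x00'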
This code (http://www.opensource.apple.com/source/python/python-3/python/Lib/telnetlib.py) seems to just ignore null characters. Is that really correct behavior?
def process_rawq(self):
    """Transfer from raw queue to cooked queue.
    Set self.eof when connection is closed. Don't block unless in
    the midst of an IAC sequence.
    """
    buf = ''
    try:
        while self.rawq:
            c = self.rawq_getchar()
            if c == theNULL:
                continue
            :
            :
:
process_rawq is then in turn called by e.g. read_until
def read_until(self, match, timeout=None):
    """Read until a given string is encountered or until timeout.
    When no match is found, return whatever is available instead,
    possibly the empty string. Raise EOFError if the connection
    is closed and no cooked data is available.
    """
    n = len(match)
    self.process_rawq()
    :
    :
I also want to receive the null character. In my particular case it marks the end of a multiline message.
So the answer seems to be that this is expected behavior as the library code is written.
FWIW https://support.microsoft.com/en-us/kb/231866 states:
Communication is established using TCP/IP and is based on a Network
Virtual Terminal (NVT). On the client, the Telnet program is
responsible for translating incoming NVT codes to codes understood by
the client's display device as well as for translating
client-generated keyboard codes into outgoing NVT codes.
The NVT uses 7-bit codes for characters. The display device, referred
to as a printer in the RFC, is only required to display the standard
printing ASCII characters represented by 7-bit codes and to recognize
and process certain control codes. The 7-bit characters are
transmitted as 8-bit bytes with the most significant bit set to zero.
An end-of-line is transmitted as a carriage return (CR) followed by a
line feed (LF). If you want to transmit an actual carriage return,
this is transmitted as a carriage return followed by a NUL (all bits
zero) character.
and
Name   Code   Decimal Value   Function
NULL   NUL    0               No operation

How to programmatically calculate Chrome extension ID?

I'm building an automated process to produce extensions. Is there a code example of calculating the extension-ID directly and entirely bypassing interaction with the browser?
(I'm answering my own question, below.)
I was only able to find a related article with a Ruby fragment, and it is now only available in the Internet Archive: http://web.archive.org/web/20120606044635/http://supercollider.dk/2010/01/calculating-chrome-extension-id-from-your-private-key-233
Important to know:
This depends on a DER-encoded public key (raw binary), not a PEM-encoded key (nice ASCII generated by base64-encoding the DER key).
The extension-IDs are base-16, but are encoded using [a-p] (called "mpdecimal"), rather than [0-9a-f].
Using a PEM-encoded public key, follow the following steps:
If your PEM-formatted public-key still has the header and footer and is split into multiple lines, reformat it by hand so that you have a single string of characters that excludes the header and footer, and runs together such that every line of the key wraps to the next.
Base64-decode the public key to render a DER-formatted public-key.
Generate a SHA256 hex-digest of the DER-formatted key.
Take the first 32 characters of the hex digest (i.e. the first 16 bytes of the hash). You will not need the rest.
For each character, take its value as a hexadecimal digit and add it to the ASCII code of 'a', so that '0' maps to 'a' and 'f' maps to 'p'.
The following is a Python routine to do this:
import hashlib
from base64 import b64decode

def build_id(pub_key_pem):
    pub_key_der = b64decode(pub_key_pem)
    sha = hashlib.sha256(pub_key_der).hexdigest()
    prefix = sha[:32]
    reencoded = ""
    ord_a = ord('a')
    for old_char in prefix:
        code = int(old_char, 16)
        new_char = chr(ord_a + code)
        reencoded += new_char
    return reencoded

def main():
    pub_key = 'MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCjvF5pjuK8gRaw/2LoRYi37QqRd48B/FeO9yFtT6ueY84z/u0NrJ/xbPFc9OCGBi8RKIblVvcbY0ySGqdmp0QsUr/oXN0b06GL4iB8rMhlO082HhMzrClV8OKRJ+eJNhNBl8viwmtJs3MN0x9ljA4HQLaAPBA9a14IUKLjP0pWuwIDAQAB'
    id_ = build_id(pub_key)
    print(id_)

if __name__ == '__main__':
    main()
You're more than welcome to test this against an existing extension and its ID. To retrieve its PEM-formatted public-key:
Go into the list of your existing extensions in Chrome. Grab the extension-ID of one.
Find the directory where the extension is hosted. On my Windows 7 box, it is: C:\Users\<username>\AppData\Local\Google\Chrome\User Data\Default\Extensions\<extension ID>
Grab the public-key from the manifest.json file under "key". Since the key is already ready to be base64-decoded, you can skip step (1) of the process.
The public-key in the example is from the "Chrome Reader" extension. Its extension ID is "lojpenhmoajbiciapkjkiekmobleogjc".
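For example, to verify the routine against an installed extension (build_id is the function from the script above; the path is illustrative):
import json

# point this at an installed extension's manifest.json
with open('manifest.json') as f:
    manifest = json.load(f)

print(build_id(manifest['key']))  # should print the extension ID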
See also:
Google Chrome - Alphanumeric hashes to identify extensions
http://blog.roomanna.com/12-14-2010/getting-an-extensions-id
Starting with Chrome 64, Chrome changed the package format for extensions to the CRX₃ file format, which supports multiple signatures and explicitly declares its CRX ID. Extracting the CRX ID from a CRX₃ file requires parsing a protocol buffer.
Here is a small python script for extracting the ID from a CRX₃ file.
This solution should only be used with trusted CRX₃ files or in contexts where security is not a concern: unlike CRX₂, the package format does not restrict what CRX ID a CRX₃ file declares. (In practice, consumers of the file (i.e. Chrome) will place restrictions upon it, such as requiring the file to be signed with at least one key that hashes to the declared CRX ID).
import binascii
import string
import struct
import sys

def decode(proto, data):
    index = 0
    length = len(data)
    msg = dict()
    while index < length:
        item = 128
        key = 0
        left = 0
        while item & 128:
            item = data[index]
            index += 1
            value = (item & 127) << left
            key += value
            left += 7
        field = key >> 3
        wire = key & 7
        if wire == 0:
            item = 128
            num = 0
            left = 0
            while item & 128:
                item = data[index]
                index += 1
                value = (item & 127) << left
                num += value
                left += 7
            continue
        elif wire == 1:
            index += 8
            continue
        elif wire == 2:
            item = 128
            _length = 0
            left = 0
            while item & 128:
                item = data[index]
                index += 1
                value = (item & 127) << left
                _length += value
                left += 7
            last = index
            index += _length
            item = data[last:index]
            if field not in proto:
                continue
            msg[proto[field]] = item
            continue
        elif wire == 5:
            index += 4
            continue
        raise ValueError(
            'invalid wire type: {wire}'.format(wire=wire)
        )
    return msg

def get_extension_id(crx_file):
    with open(crx_file, 'rb') as f:
        f.read(8)  # 'Cr24\3\0\0\0'
        data = f.read(struct.unpack('<I', f.read(4))[0])
    crx3 = decode(
        {10000: "signed_header_data"},
        [ord(d) for d in data])
    signed_header = decode(
        {1: "crx_id"},
        crx3['signed_header_data'])
    return string.translate(
        binascii.hexlify(bytearray(signed_header['crx_id'])),
        string.maketrans('0123456789abcdef', string.ascii_lowercase[:16]))

def main():
    if len(sys.argv) != 2:
        print 'usage: %s crx_file' % sys.argv[0]
    else:
        print get_extension_id(sys.argv[1])

if __name__ == "__main__":
    main()
(Thanks to https://github.com/thelinuxkid/python-protolite for the protobuf parser skeleton.)
Here is a nice and simple way to get the public key from the .crx file itself using Python, since Chrome only generates the private .pem key for you. The public key is actually stored in the .crx file.
This is based on the format of the .crx file found here http://developer.chrome.com/extensions/crx.html
import struct
import hashlib
import string

def get_pub_key_from_crx(crx_file):
    with open(crx_file, 'rb') as f:
        data = f.read()
    header = struct.unpack('<4sIII', data[:16])
    pubkey = struct.unpack('<%ds' % header[2], data[16:16+header[2]])[0]
    return pubkey

def get_extension_id(crx_file):
    pubkey = get_pub_key_from_crx(crx_file)
    digest = hashlib.sha256(pubkey).hexdigest()
    trans = string.maketrans('0123456789abcdef', string.ascii_lowercase[:16])
    return string.translate(digest[:32], trans)

if __name__ == '__main__':
    import sys
    if len(sys.argv) != 2:
        print 'usage: %s crx_file' % sys.argv[0]
    else:
        print get_extension_id(sys.argv[1])
Although this isn't quite "bypassing interaction with the browser", because you still need to generate the .crx file with a command like
chrome.exe --pack-extension=my_extension --pack-extension-key=my_extension.pem

Python - Read .b4u files - error sequence item 0: expected str instance, bytes found

I'm trying to use the Python code from http://grantcox.com.au/2012/01/decoding-b4u-binary-file-format/ to export .b4u files to HTML format, but for some reason, at this point in the program:
# find the initial caret position - this changes between files for some reason - search for the "Cards" string
for i in range(3):
    addr = 104 + i*4
    if ''.join(self.parser.read('sssss', addr)) == 'Cards':
        caret = addr + 32
        break

if caret is None:
    return
I get the following error:
if ''.join(self.parser.read('sssss', addr)) == 'Cards':
TypeError: sequence item 0: expected str instance, bytes found
The Python version I'm using is: Python 3.3.1 (v3.3.1:d9893d13c628, Apr 6 2013, 20:25:12).
Any idea how to solve that problem?
I have got it working under Python 2.7.4; my Python 3.3.2 gives the same error. I'll get back to you if I find out how to port this piece of code to Python 3.x. It must have something to do with Unicode being the default for strings in Python 3.
Here is a solution I came up with:
def read(self, fmt, offset):
    if self.filedata is None:
        return None
    read = struct.unpack_from('<' + fmt, self.filedata, offset)
    # decode any bytes items to str, leaving other types untouched
    xread = []
    for each in range(0, len(read)):
        try:
            xread.append(read[each].decode())
        except:
            xread.append(read[each])
    read = xread
    if len(read) == 1:
        return read[0]
    return read

def string(self, offset):
    if self.filedata is None:
        return None
    s = u''
    if offset > 0:
        length = self.read('H', offset)
        for i in range(length):
            raw = self.read('H', offset + i*2 + 2)
            char = raw ^ 0x7E
            s = s + chr(char)
    return s

def plain_fixed_string(self, offset):
    if self.filedata is None:
        return None
    plain_bytes = struct.unpack_from('<ssssssssssssssssssssssss', self.filedata, offset)
    xplain_bytes = []
    for each in range(0, len(plain_bytes)):
        try:
            xplain_bytes.append(plain_bytes[each].decode())
        except:
            xplain_bytes.append(plain_bytes[each])
    plain_bytes = xplain_bytes
    plain_string = ''.join(plain_bytes).strip('\x00')
    return plain_string
You can just use these methods instead of the ones provided by the original author.
Beware that you should also change unicode() to str() and unichr() to chr() wherever you see them. Also remember that in Python 3 print is a function and cannot be used without parentheses.
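If you are porting more of the original script, a small compatibility shim (an assumption about how the original uses these names) saves you from scattering edits; defined at the top of the module, it makes the Python 2 names valid in Python 3:
import sys

if sys.version_info[0] >= 3:
    unicode = str   # in Python 3, str is already Unicode
    unichr = chr    # and chr covers the full code-point range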
