I wrote a simple procedure to calculate the average test coverage of some specific packages in a Java project. The raw data, in a huge HTML file, looks like this:
<body>
package pkg1 <line_coverage>11/111,<branch_coverage>44/444<end>
package pkg2 <line_coverage>22/222,<branch_coverage>55/555<end>
package pkg3 <line_coverage>33/333,<branch_coverage>66/666<end>
...
</body>
Given the specified packages "pkg1" and "pkg3", for example, the average line coverage is:
(11+33)/(111+333)
and average branch coverage is:
(44+66)/(444+666)
I wrote the following procedure to get the result and it works well. But how can I implement this calculation in a functional style? Something like "(x,y) for x in ... for b in ... if ...". I know a little Erlang, Haskell and Clojure, so solutions in those languages are also appreciated. Thanks a lot!
from __future__ import division
import re

datafile = ('abc', 'd>11/23d>34/89d', 'e>25/65e>13/25e', 'f>36/92f>19/76')
core_pkgs = ('d', 'f')

covered_lines, total_lines, covered_branches, total_branches = 0, 0, 0, 0
for line in datafile:
    for pkg in core_pkgs:
        ptn = re.compile(r'.*' + pkg + r'.*>(\d+)/(\d+).*>(\d+)/(\d+).*')
        match = ptn.match(line)
        if match is not None:
            cvln, tlln, cvbh, tlbh = match.groups()
            covered_lines += int(cvln)
            total_lines += int(tlln)
            covered_branches += int(cvbh)
            total_branches += int(tlbh)

print 'Line coverage:', '{:.2%}'.format(covered_lines / total_lines)
print 'Branch coverage:', '{:.2%}'.format(covered_branches / total_branches)
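For reference, here is a minimal, untested sketch of what that comprehension style could look like in plain Python, done in three passes (parse, filter, sum). The regex is adapted to the simplified sample data above and is an assumption, not the original pattern:

import re

datafile = ('abc', 'd>11/23d>34/89d', 'e>25/65e>13/25e', 'f>36/92f>19/76')
core_pkgs = ('d', 'f')

pattern = re.compile(r'(\w+)>(\d+)/(\d+).*>(\d+)/(\d+)')

# Pass 1: parse every line that matches into (pkg, cl, tl, cb, tb) string tuples
parsed = [m.groups() for m in (pattern.search(line) for line in datafile) if m]

# Pass 2: keep only the packages we care about, converting the numbers to ints
relevant = [tuple(int(x) for x in grp[1:]) for grp in parsed if grp[0] in core_pkgs]

# Pass 3: sum each column (covered lines, total lines, covered branches, total branches)
cl, tl, cb, tb = (sum(col) for col in zip(*relevant))

print('Line coverage: {:.2%}'.format(float(cl) / tl))
print('Branch coverage: {:.2%}'.format(float(cb) / tb))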
Down below you can find my Haskell solution. I will try to explain the important points I went through as I wrote it.
First you will find that I created a data structure for coverage data. It's generally a good idea to create data structures to represent whatever data you want to handle. This is partly because it is easier to design your code when you can think in terms of the thing you are modelling (closely related to functional programming philosophies), and partly because it eliminates a class of bugs where you think you are working with one thing but are in actuality working with another.
Related to the point before: The first thing I do is to convert the string-represented data into my own data structure. When you are doing functional programming, you are often doing things in "sweeps." You don't have a single function that converts data to your format, filters out the unwanted data and summarises the result. You have three different functions for each of those tasks, and you do them one at a time!
This is because functions are very composable, i.e. if you have three different ones, you can stick them together to form a single one if you want to. If you start with a single one, it is very difficult to take it apart to form three different ones.
The inner workings of the conversion function are quite uninteresting unless you are specifically doing Haskell. All it does is try to match each string against a regex, and if it succeeds, add the coverage data to the resulting list.
Again, mad composition is about to happen. I don't create a function to loop over a list of coverages and sum them up. I create a single function to sum two coverages, because I know I can use it together with the specialised fold loop (which is sort of like a for loop on steroids) to summarise all coverages in a list. There's no need for me to reinvent the wheel and create a loop myself.
Besides, my sumCoverage function works with a lot of specialised loops, so I don't have to write a ton of functions; I just stick my single function into a ton of pre-made library functions!
In the main function you will see what I mean by programming in "sweeps" or "passes" over the data. First I convert it to the internal format, then I filter out the unwanted data, then I summarise the remaining data. These are completely independent computations. That's functional programming.
You will also notice that I use two specialised loops there, filter and fold. This means that I don't have to write any loops myself, I just stick in a function to those standard library loops and let those take it from there.
import Data.Maybe (catMaybes)
import Data.List (foldl')
import Text.Printf (printf)
import Text.Regex (matchRegex, mkRegex)
corePkgs = ["d", "f"]
stats = [
    "d>11/23d>34/89d",
    "e>25/65e>13/25e",
    "f>36/92f>19/76"
    ]
format = mkRegex ".*(\\w+).*>([0-9]+)/([0-9]+).*>([0-9]+)/([0-9]+).*"
-- It might be a good idea to define a datatype for coverage data.
-- A bit of coverage data is defined as the name of the package it
-- came from, the lines covered, the total amount of lines, the
-- branches covered and the total amount of branches.
data Coverage = Coverage String Int Int Int Int
-- Then we need a way to convert the string data into a list of
-- coverage data. We do this by regex. We try to match on each
-- string in the list, and then we choose to keep only the successful
-- matches. Returned is a list of coverage data that was represented
-- by the strings.
convert :: [String] -> [Coverage]
convert = catMaybes . map match
  where match line = do
          [name, cl, tl, cb, tb] <- matchRegex format line
          return $ Coverage name (read cl) (read tl) (read cb) (read tb)
-- We need a way to summarise two coverage data bits. This can of course also
-- be used to summarise entire lists of coverage data, by folding over it.
sumCoverage (Coverage nameA clA tlA cbA tbA) (Coverage nameB clB tlB cbB tbB) =
  Coverage (nameA ++ nameB ++ ",") (clA + clB) (tlA + tlB) (cbA + cbB) (tbA + tbB)
main = do
  -- First we need to convert the strings to coverage data
  let coverageData = convert stats
  -- Then we want to filter out only the relevant data
      relevantData = filter (\(Coverage name _ _ _ _) -> name `elem` corePkgs) coverageData
  -- Then we need to summarise it, but we are only interested in the numbers
      Coverage _ cl tl cb tb = foldl' sumCoverage (Coverage "" 0 0 0 0) relevantData
  -- So we can finally print them!
  printf "Line coverage: %.2f\n" (fromIntegral cl / fromIntegral tl :: Double)
  printf "Branch coverage: %.2f\n" (fromIntegral cb / fromIntegral tb :: Double)
Here are some quickly-hacked, untested ideas applied to your code:
import numpy as np
import re

datafile = ('abc', 'd>11/23d>34/89d', 'e>25/65e>13/25e', 'f>36/92f>19/76')
core_pkgs = ('d', 'f')

covered_lines, total_lines, covered_branches, total_branches = 0, 0, 0, 0
for pkg in core_pkgs:
    ptn = re.compile(r'.*' + pkg + r'.*>(\d+)/(\d+).*>(\d+)/(\d+).*')
    matches = map(ptn.match, datafile)
    statsList = [map(int, match.groups()) for match in matches if match]
    # statsList is a list of [cvln, tlln, cvbh, tlbh]
    stats = np.array(statsList)
    covered_lines, total_lines, covered_branches, total_branches = stats.sum(axis=0)
Well, as you can see I haven't bothered to finish off the remaining loop, but I think the point is made by now. There's certainly a lot more than one way to do this; I elected to show off map() (which some will say makes this less efficient, and it probably does), as well as NumPy to get the (admittedly light) math done.
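For completeness, here is one untested way the remaining accumulation could look, adding the per-package sums up across core_pkgs. This is only a sketch; the names core_pkgs and datafile are taken from the snippet above:

import re
import numpy as np

totals = np.zeros(4, dtype=int)  # covered_lines, total_lines, covered_branches, total_branches
for pkg in core_pkgs:
    ptn = re.compile(r'.*' + pkg + r'.*>(\d+)/(\d+).*>(\d+)/(\d+).*')
    statsList = [[int(g) for g in m.groups()]
                 for m in (ptn.match(line) for line in datafile) if m]
    if statsList:
        totals += np.array(statsList).sum(axis=0)

covered_lines, total_lines, covered_branches, total_branches = totals
print('Line coverage: {:.2%}'.format(float(covered_lines) / total_lines))
print('Branch coverage: {:.2%}'.format(float(covered_branches) / total_branches))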
This is the corresponding Clojure solution:
(defn extract-data
  "extract 4 integers from a string line according to a package name"
  [pkg line]
  (map read-string
       (rest (first
              (re-seq
               (re-pattern
                (str pkg ".*>(\\d+)/(\\d+).*>(\\d+)/(\\d+)"))
               line)))))
(defn scan-lines-by-pkg
  "scan all string lines and extract all data as integer sequences
   according to package names"
  [pkgs lines]
  (filter seq (for [pkg pkgs
                    line lines]
                (extract-data pkg line))))

(defn sum-data
  "add all data in valid lines together"
  [pkgs lines]
  (apply map + (scan-lines-by-pkg pkgs lines)))

(defn get-percent
  [covered all]
  (str (format "%.2f" (float (/ (* covered 100) all))) "%"))

(defn get-cov
  [pkgs lines]
  {:line-cov (apply get-percent (take 2 (sum-data pkgs lines)))
   :branch-cov (apply get-percent (drop 2 (sum-data pkgs lines)))})
(get-cov ["d" "f"] ["abc" "d>11/23d>34/89d" "e>25/65e>13/25e" "f>36/92f>19/76"])
I often work with groups of materials and my files/materials are named as alphanumeric strings. Is there a library to turn a string like r"Mxene - Ti3C2" into the LaTeX-styled r"Mxene - Ti$_\mathrm{3}$C$_\mathrm{2}$"?
I usually use a dictionary but going through every name is a hassle and prone to error because materials can always be added or removed from the study.
I know that I can use str.maketrans() to generate subscripts but I haven't had very consistent results using the output with matplotlib so I'd much rather use latex.
I've ultimately created this solution in case anyone else needs it. Since my problem is mostly about creating subscripts, the following code looks for numbers and replaces each one with its LaTeX subscript equivalent.
def latexify(s):
    import re
    nums = re.findall(r'\d+', s)
    pos = [[m.start(0), m.end(0)] for m in re.finditer(r'\d+', s)]
    numpos = list(zip(nums, pos))
    for num, pos in numpos:
        string = rf"$_\mathrm{{{num}}}$"
        s = s[:pos[0]] + string + s[pos[1]:]
        # shift the recorded positions of any numbers that come after this one
        for ind, (n, [p_st, p_end]) in enumerate(numpos):
            if p_st > pos[1]:
                numpos[ind][1][0] += len(string) - len(num)
                numpos[ind][1][1] += len(string) - len(num)
    return s

latexify("Ti32C2")
Returns:
'Ti$_\\mathrm{32}$C$_\\mathrm{2}$'
I have a very big file with millions of paths to various executables on windows systems. A simple example would be the following:
C:\windows\ccmcache\1d\Deploy-Application.exe
C:\WINDOWS\ccmcache\7\Deploy-Application.exe
C:\windows\ccmcache\2o\Deploy-Application.exe
C:\WINDOWS\ccmcache\6\Deploy-Application.exe
C:\WINDOWS\ccmcache\15\Deploy-Application.exe
C:\WINDOWS\ccmcache\m\Deploy-Application.exe
C:\WINDOWS\ccmcache\1g\Deploy-Application.exe
C:\windows\ccmcache\2r\Deploy-Application.exe
C:\windows\ccmcache\1l\Deploy-Application.exe
C:\windows\ccmcache\2s\Deploy-Application.exe
or
C:\Users\user23452345\temp\test\1\Another1-Application.exe
C:\Users\user1324asdf\temp\Another-Applicatiooon.exe
C:\Users\user23452---5\temp\lili\Another-Application.exe
C:\Users\user23hkjhf_5\temp\An0ther-Application.exe
As a human, I can recognize that these strings are similar and match them fairly easily with some regex in code. My issue however is to find these patterns in the first place, as there are far too many of them, they are completely unknown to me, and they change frequently.
My goal is to write a python script that finds these similar strings with a degree of certainty and groups them for me.
Which methods, libraries, keywords etc. should I look into to solve this problem?
One possible way to approach this is by calculating the distance between strings. For that, you could use the textdistance lib.
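A minimal, untested sketch of what that could look like, assuming textdistance is installed and using its normalized Levenshtein similarity (the threshold value here is an arbitrary assumption to tune on real data):

import textdistance

a = r'C:\windows\ccmcache\1d\Deploy-Application.exe'
b = r'C:\WINDOWS\ccmcache\7\Deploy-Application.exe'

# 1.0 means identical, 0.0 means completely different
similarity = textdistance.levenshtein.normalized_similarity(a.lower(), b.lower())
if similarity > 0.9:  # assumed threshold
    print('probably the same pattern:', similarity)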
Hope this helps!
Edit:
Two starting points to get more familiarized with the subject:
https://en.wikipedia.org/wiki/Edit_distance
https://en.wikipedia.org/wiki/Levenshtein_distance
Try fuzzywuzzy, a soft string matcher. It makes a difference if you keep the strings as they are or lower case them first:
from fuzzywuzzy import fuzz
import itertools

lines = [
    r'C:\windows\ccmcache\1d\Deploy-Application.exe',
    r'C:\WINDOWS\ccmcache\m\Deploy-Application.exe',
    r'user5323\A-different-Application.bat',
]

for line1, line2 in itertools.combinations(lines, r=2):
    case_match = fuzz.ratio(line1, line2)
    insensitive_case_match = fuzz.ratio(line1.lower(), line2.lower())
    print(line1[:10], '...', line1[:-10])
    print(line2[:10], '...', line2[:-10])
    print(case_match, insensitive_case_match)
    print()
C:\windows ... C:\windows\ccmcache\1d\Deploy-Appli
C:\WINDOWS ... C:\WINDOWS\ccmcache\m\Deploy-Appli
80 95
C:\windows ... C:\windows\ccmcache\1d\Deploy-Appli
user5323\A ... user5323\A-different-Appli
42 45
C:\WINDOWS ... C:\WINDOWS\ccmcache\m\Deploy-Appli
user5323\A ... user5323\A-different-Appli
40 45
One fairly straightforward way would be to simply check "how much" a pair of strings differ. Like so:
import difflib
from collections import defaultdict
grouping_requirement = 0.75  # in (0, 1); the closer to 1, the more similar two strings must be to end up in the same group
s = r'''C:\windows\ccmcache\1d\Deploy-Application.exe
C:\WINDOWS\ccmcache\7\Deploy-Application.exe
C:\windows\ccmcache\2o\Deploy-Application.exe
C:\WINDOWS\ccmcache\6\Deploy-Application.exe
C:\WINDOWS\ccmcache\15\Deploy-Application.exe
C:\WINDOWS\ccmcache\m\Deploy-Application.exe
C:\WINDOWS\ccmcache\1g\Deploy-Application.exe
C:\windows\ccmcache\2r\Deploy-Application.exe
C:\windows\ccmcache\1l\Deploy-Application.exe
C:\windows\ccmcache\2s\Deploy-Application.exe
C:\Users\user23452345\temp\test\1\Another1-Application.exe
C:\Users\user1324asdf\temp\Another-Applicatiooon.exe
C:\Users\user23452---5\temp\lili\Another-Application.exe
C:\Users\user23hkjhf_5\temp\An0ther-Application.exe'''
groups = defaultdict(list)

def match_ratio(s1, s2):
    return difflib.SequenceMatcher(None, s1, s2).ratio()

for line in set(s.splitlines()):
    for group in groups:
        if match_ratio(group, line) > grouping_requirement:
            groups[group].append(line)
            break
    else:
        groups[line].append(line)

for group in groups.values():
    print(', '.join(group))
    print()
The output of this little application is:
C:\WINDOWS\ccmcache\1g\Deploy-Application.exe, C:\WINDOWS\ccmcache\m\Deploy-Application.exe, C:\windows\ccmcache\1l\Deploy-Application.exe, C:\WINDOWS\ccmcache\15\Deploy-Application.exe, C:\WINDOWS\ccmcache\7\Deploy-Application.exe, C:\WINDOWS\ccmcache\6\Deploy-Application.exe, C:\windows\ccmcache\2s\Deploy-Application.exe, C:\windows\ccmcache\1d\Deploy-Application.exe, C:\windows\ccmcache\2o\Deploy-Application.exe, C:\windows\ccmcache\2r\Deploy-Application.exe
C:\Users\user23452345\temp\test\1\Another1-Application.exe, C:\Users\user23hkjhf_5\temp\An0ther-Application.exe, C:\Users\user1324asdf\temp\Another-Applicatiooon.exe, C:\Users\user23452---5\temp\lili\Another-Application.exe
As you can see at the top of the code snippet, there is a constant, grouping_requirement, which I arbitrarily set to 0.75. If you reduce that value closer to 0.0, more paths will be grouped together; if you raise it closer to 1.0, fewer paths will be grouped. Good luck!
I would like to go through a gene and get a list of 10bp long sequences containing the exon/intron borders from each feature.type == 'mRNA'. It seems like I need to use CompoundLocation and the locations used in 'join', but I cannot figure out how to do it, or find a tutorial.
Could anyone please give me an example or point me to a tutorial?
Assuming all the info is in the exact format you show in the comment, and that you're looking for 20 bp on either side of each intron/exon boundary, something like this might be a start:
Edit: If you're actually starting from a GenBank record, then it's not much harder. Assuming that the full junction string you're looking for is in the CDS feature info, then:
for f in record.features:
    if f.type == 'CDS':
        jct_info = str(f.location)
converts the "location" information into a string and you can continue as below.
(There are ways to work directly with the location information without converting to a string - in particular you can use "extract" to pull the spliced sequence directly out of the parent sequence -- but the steps involved in what you want to do are faster and more easily done by converting to str and then int.)
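For illustration, here is a small untested sketch of that direct route. It assumes record is a parsed SeqRecord (e.g. from SeqIO.read on a hypothetical GenBank file) and that the feature of interest is a CDS:

from Bio import SeqIO

record = SeqIO.read("example.gbk", "genbank")  # hypothetical file name
for f in record.features:
    if f.type == 'CDS':
        spliced = f.extract(record.seq)  # joins the exon pieces listed in the CompoundLocation
        print(len(spliced))

Back to the string-based route: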
import re
jct_info = "join{[0:229](+), [11680:11768](+), [11871:12135](+), [15277:15339](+), [16136:16416](+), [17220:17471](+), [17547:17671](+)"
jctP = re.compile(r"\[\d+\:\d+\]")
jcts = jctP.findall(jct_info)
jcts
['[0:229]', '[11680:11768]', '[11871:12135]', '[15277:15339]', '[16136:16416]', '[17220:17471]', '[17547:17671]']
Now you can loop through the list of start:end values, pull them out of the text and convert them to ints so that you can use them as sequence indexes. Something like this:
for jct in jcts:
    (start, end) = jct.replace('[', '').replace(']', '').split(':')
    try:  # You need to account for going out of index, e.g. where start = 0
        start_20_20 = seq[int(start)-20:int(start)+20]
    except IndexError:
        pass  # do your alternatives e.g. start = int(start)
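If it helps, here is one untested way to finish that idea, clamping at the sequence ends instead of relying on an exception. seq is assumed to be the parent sequence, jcts is the list built above, and the window size is an arbitrary choice:

window = 20  # bases on either side of each boundary; adjust as needed

junction_seqs = []
for jct in jcts:
    start, end = map(int, jct.strip('[]').split(':'))
    for boundary in (start, end):
        lo = max(boundary - window, 0)          # clamp at the start of the sequence
        hi = min(boundary + window, len(seq))   # clamp at the end of the sequence
        junction_seqs.append(seq[lo:hi])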
I am trying to understand if there is an advantage in space/time/programming to storing data from a signal processing system as nested lists in either:
data[channel][sample]
data[sample][channel]
I can code processing for both, though I personally find 1) easier to write and index than 2).
However, 2) is the more common way my local group programs and stores the data (either in excel/csv or from the data gathering systems). While it is easy to transpose
dataA = map(list, zip(*dataB))
I was wondering if there are any storage or performance, or even module compatibility, issues with 1) over 2)?
with 1) I can loop like this
for R in dataA:
    for C in R:
        process_channel(C)

matplotlib.loglog(dataA[0], dataA[i])
where dataA[0] is time or frequency and i is some other channel to plot
with 2)
for R in dataB:
    for C in R:
        process_sample(C)

matplotlib.loglog([j[0] for j in dataB], [k[i] for k in dataB])
This looks worse in programming style. Maybe I am missing a list method of making this easier? I have also developed code that uses dicts ... but this really breaks with general use. So I am less inclined to continue to use dicts. Although the dict storage is
dataC = [{'f': 0.1, 'chnl1': 100.0}, {'f': 0.2, 'chnl1': 110.0}]
or some such. It seems that, for better integration, option 2) is better. However, I am trying to understand how best to code with option 2) when you wish to process over channels and then samples. Just transpose the matrix first, do the work in option 1) space, and transpose the results back:
import numpy

def smoothing(d, step):
    td = numpy.transpose(d)          # or, without numpy: td = map(list, zip(*d))
    nd = []
    for row in td:
        col = []
        for i in xrange(0, len(row) - step, step):
            col.append(sum(row[i:i + step]) / step)
        nd.append(col)
    nd = numpy.transpose(nd)
    return nd

dataA = smoothing(dataA, smooth_factor)
While this construction works, transposing back and forth all the time looks, um, inefficient.
I am rather new to Python programming, so please keep your answer simple.
I have a .raw file which is in 2b/2b complex short int format. It's actually a 2-D raster file. I want to read it and separate the real and imaginary parts. Let's say the raster is [MxN] in size.
Please let me know if the question is not clear.
Cheers
N
You could do it with the struct module. Here's a simple example based on the file formatting information you mentioned in a comment:
import struct

def read_complex_array(filename, M, N):
    row_fmt = '={}h'.format(N)  # "=" prefix means integers in native byte-order
    row_len = struct.calcsize(row_fmt)
    result = []
    with open(filename, "rb") as input:
        for col in xrange(M):
            reals = struct.unpack(row_fmt, input.read(row_len))
            imags = struct.unpack(row_fmt, input.read(row_len))
            cmplx = [complex(r, i) for r, i in zip(reals, imags)]
            result.append(cmplx)
    return result
This will return a list of complex-number lists, as can be seen in this output from a trivial test I ran:
[
[ 0.0+ 1.0j 1.0+ 2.0j 2.0+ 3.0j 3.0+ 4.0j],
[256.0+257.0j 257.0+258.0j 258.0+259.0j 259.0+260.0j],
[512.0+513.0j 513.0+514.0j 514.0+515.0j 515.0+516.0j]
]
The real and imaginary parts of a complex number in Python are each represented as a machine-level double-precision floating-point number.
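As a small, untested usage sketch (the file name and dimensions here are assumptions), the result can be handed straight to NumPy if you want to work on it as an MxN array:

import numpy as np

rows = read_complex_array('raster.raw', M=3, N=4)  # hypothetical file and sizes
arr = np.array(rows)                               # complex array with shape (M, N)
print(arr.shape, arr.dtype)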
You could also use the array module. Here's the same thing using it:
import array

def read_complex_array2(filename, M, N):
    result = []
    with open(filename, "rb") as input:
        for col in xrange(M):
            reals = array.array('h')
            reals.fromfile(input, N)
            # reals.byteswap()  # if necessary
            imags = array.array('h')
            imags.fromfile(input, N)
            # imags.byteswap()  # if necessary
            cmplx = [complex(r, i) for r, i in zip(reals, imags)]
            result.append(cmplx)
    return result
As you can see, they're very similar, so it's not clear there's a big advantage to using one over the other. I suspect the array based version might be faster, but that would have to be determined by actually timing it with some real data to be able to say with any certainty.
Take a look at the Hachoir library. It's designed for this purpose and does its job really well.