Python subprocess Popen: piping stdout messes up the strings

I am trying to concatenate a couple of files together and add a header.
import subprocess

outpath = "output.tab"
with open(outpath, "w") as outf:
    # write a header
    if header is True:
        p1 = subprocess.Popen(["head", "-n1", files[-1]], stdout=outf)
    if type(header) is str:
        p1 = subprocess.Popen(["head", "-n1", header], stdout=outf)
    for fl in files:
        print(fl)
        p1 = subprocess.Popen(["tail", "-n+2", fl], stdout=outf)
For some reason some files (fl) are only partially written, and the next file starts in the middle of a string from the previous one:
awk '{print NF}' output.tab | uniq -c
108 11
1 14
69 11
1 10
35 11
1 16
250 11
1 16
Is there any way to fix it in Python?
An example of messed up lines:
$tail -n+108 output.tab | head -n1
CENPA chr2 27008881.0 2701ABCD3 chr1 94883932.0 94944260.0 0.0316227766017 0.260698861451 0.277741584016 0.302602378581 0.4352790705329718 56 16
$grep -n -A1 'CENPA' file1.tab
109:CENPA chr2 27008881.0 27017455.0 1.0 0.417081004817 0.0829327365256 0.545205239241 0.7196619496326693 95 3
110-CENPO chr2 25016174.0 25045245.0 1000.0 0.151090930896 -0.0083671250883 0.50882773122 0.0876177652747541 82 0
$grep -n 'ABCD3' file2.tab
2:ABCD3 chr1 94883932.0 94944260.0 0.0316227766017 0.260698861451 0.277741584016 0.302602378581 0.4352790705329718 56 16

I think the issue here is that subprocess.Popen() runs asynchronously by default, and you seem to want it to run synchronously. So really, all of your head and tail commands are running at the same time, directing into the output file.
To fix this, you probably want to just add .wait():
import subprocess

outpath = "output.tab"
with open(outpath, "w") as outf:
    # write a header
    if header is True:
        p1 = subprocess.Popen(["head", "-n1", files[-1]], stdout=outf)
        p1.wait()  # pauses the script until the command finishes
    if type(header) is str:
        p1 = subprocess.Popen(["head", "-n1", header], stdout=outf)
        p1.wait()
    for fl in files:
        print(fl)
        p1 = subprocess.Popen(["tail", "-n+2", fl], stdout=outf)
        p1.wait()
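For what it's worth, on Python 3.5+ the same thing can be written with subprocess.run, which blocks until each command finishes, so the waits are implicit. A minimal self-contained sketch (the sample files and names here are placeholders, not the asker's data):

```python
import subprocess

# create two small placeholder input files
for name, rows in [("a.tab", ["h1\th2", "1\t2"]),
                   ("b.tab", ["h1\th2", "3\t4"])]:
    with open(name, "w") as f:
        f.write("\n".join(rows) + "\n")

files = ["a.tab", "b.tab"]
outpath = "output.tab"

with open(outpath, "w") as outf:
    # header line from the first file; run() blocks until head exits
    subprocess.run(["head", "-n1", files[0]], stdout=outf, check=True)
    for fl in files:
        # body of each file, skipping its header line
        subprocess.run(["tail", "-n+2", fl], stdout=outf, check=True)
```

check=True raises CalledProcessError if head or tail fails, which is usually what you want in a script like this.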

Related

Why doesn't SCons Package copy .h files?

I have a project with many subdirectories and types of files (python, c++, configuration files, images, etc).
When I use SCons env.Package like this:
env.Package(
    NAME = 'isid',
    VERSION = '1.0',
    PACKAGEVERSION = '11',
    PACKAGETYPE = 'rpm',
    LICENSE = 'gpl',
    SUMMARY = 'just a test',
    DESCRIPTION = 'the most test app in the world',
    X_RPM_GROUP = 'Application/isid',
    SOURCE_URL = 'http://isid.com/versions/isid-1.0.11.tar.gz',
)
I get everything in isid-1.0.11.tar.gz except for the h files.
This automatically leads to build errors in ./isid-1.0.11 that stop rpmbuild from running.
EDIT
My project is split into a few subdirectories.
In each I have an SConscript that starts with these lines, or similar, depending on the includes it needs:
# import all variables
Import('*')

include = Dir([
    '../inc/',
])

local_env = env.Clone( CPPPATH = include )
SConstruct just defines the variables and calls SConscript() on each subdirectory.
The call to Package is done in SConstruct, so I guess SCons indeed does not know the dependencies.
Snippet of # scons --tree=prune:
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
scons: `.' is up to date.
+-.
+-SConstruct
+-correlator
| +-correlator/SConscript
| +-correlator/collector
| +-correlator/correlation_command
| | +-correlator/correlation_command/app_command_device.cpp
| | +-correlator/correlation_command/app_command_device.h
| | +-correlator/correlation_command/app_command_device.o
| | | +-correlator/correlation_command/app_command_device.cpp
| | | +-correlator/correlation_command/app_command_device.h
| | | +-correlator/entity/entity.h
| | | +-infra/inc/app_command/app_command.h
| | | +-/bin/g++
| | +-correlator/correlation_command/app_command_event.cpp
| | +-correlator/correlation_command/app_command_event.h
EDIT #2
Here is a complete, minimal, example that produces the same problem.
To reproduce the problem, run:
scons pack=1.0 ver=1
files:
SConstruct
app1/SConscript
app1/main.cpp
app1/inc.h
Listing
SConstruct
# main file

env = Environment(tools=['default', 'packaging'])

Export( 'env' )

flags = [
    '-Wall',
    '-Werror',
    '-g',
    '-ggdb3',
    '-gdwarf-2',
    '-std=c++11',
]

env.Append( CPPFLAGS = flags )

scripts = []

Sapp1 = 'app1/SConscript'
scripts.append( Sapp1 )

env.SConscript( scripts )

pack = ARGUMENTS.get('pack', '')
ver = ARGUMENTS.get('ver', '99' )
if pack:
    env.Package(
        NAME = 'app1',
        VERSION = pack,
        PACKAGEVERSION = ver,
        PACKAGETYPE = 'rpm',
        LICENSE = 'private',
        SUMMARY = 'exampe app #1',
        DESCRIPTION = 'the most powerfull exampe #1',
        X_RPM_GROUP = 'Application/app1',
        SOURCE_URL = 'http://example.com/1/app1-1.0.1.tar.gz',
    )
app1/SConscript
# import all variables
Import('*')

# add specific include directory
include = Dir( [
    '.',
])

local_env = env.Clone( CPPPATH = include )

# define sources
sources = [
    'main.cpp',
]

libs = [
]

main_name = 'app1',

main_obj = local_env.Program( target = main_name, source = sources, LIBS = libs )

# install
install_dir = '/opt/rf/app1'
install_files = [ main_obj ]

local_env.Install( dir = install_dir, source = install_files )
local_env.Command( install_dir, install_files, "chown -R rf:rfids $TARGET" )

local_env.Alias( 'install', install_dir )
app1/main.cpp
#include "inc.h"

int main()
{
    int l = g;

    return l;
}
app1/inc.h
int g = 100;
output:
# scons pack=1.0 ver=1
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
g++ -o app1/main.o -c -Wall -Werror -g -ggdb3 -gdwarf-2 -std=c++11 -Iapp1 app1/main.cpp
g++ -o app1/app1 app1/main.o
LC_ALL=C rpmbuild -ta /home/ran/work/rpmexample/app1-1.0.1.tar.gz
scons: *** [app1-1.0-1.src.rpm] app1-1.0-1.src.rpm: Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.wU9lDZ
+ umask 022
+ cd /home/ran/work/rpmexample/rpmtemp/BUILD
+ '[' -n /home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64 -a /home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64 '!=' / ']'
+ rm -rf /home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64
+ cd /home/ran/work/rpmexample/rpmtemp/BUILD
+ rm -rf app1-1.0
+ /usr/bin/gzip -dc /home/ran/work/rpmexample/app1-1.0.1.tar.gz
+ /usr/bin/tar -xf -
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ cd app1-1.0
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ exit 0
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.gVaX4j
+ umask 022
+ cd /home/ran/work/rpmexample/rpmtemp/BUILD
+ cd app1-1.0
+ '[' '!' -e /home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64 -a /home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64 '!=' / ']'
+ mkdir /home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.JWAdxE
+ umask 022
+ cd /home/ran/work/rpmexample/rpmtemp/BUILD
+ '[' /home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64 '!=' / ']'
+ rm -rf /home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64
++ dirname /home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64
+ mkdir -p /home/ran/work/rpmexample/rpmtemp/BUILDROOT
+ mkdir /home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64
+ cd app1-1.0
+ scons --install-sandbox=/home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64 /home/ran/work/rpmexample/rpmtemp/BUILDROOT/app1-1.0-1.x86_64
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
g++ -o app1/main.o -c -Wall -Werror -g -ggdb3 -gdwarf-2 -std=c++11 -Iapp1 app1/main.cpp
app1/main.cpp:2:17: fatal error: inc.h: No such file or directory
#include "inc.h"
^
compilation terminated.
scons: *** [app1/main.o] Error 1
scons: building terminated because of errors.
error: Bad exit status from /var/tmp/rpm-tmp.JWAdxE (%install)
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.JWAdxE (%install)
scons: building terminated because of errors.

parsing from column in perl or bash

A file I am working with looks like this:
NAMES n0 n1 n2 n3 n4 n5 n6 n7
REGION chr 1 100000
404 AAAAAAGA
992 TTTTTTTA
1146 CCCCGGCC
1727 CCCCCACC
1778 GCCCCCCC
I would need to split the file based on the number in the first column - creating a new file for every 1000 units - so the output would be
file1
NAMES n0 n1 n2 n3 n4 n5 n6 n7
REGION chr 404 992
404 AAAAAAGA
992 TTTTTTTA
file2
NAMES n0 n1 n2 n3 n4 n5 n6 n7
REGION chr 1146 1778
1146 CCCCGGCC
1727 CCCCCACC
1778 GCCCCCCC
So, split on the first column every 1000 units (file1 covers 1 to 1000, file2 covers 1000 to 2000, and so on). The start and end positions would also change in every file (the line starting with REGION): the first number is the number in the first data line of the file and the other number is the number in the last line of the file. The header needs to be present in all files. Is there a way to name the files systematically with file1, file2, ...? \t is used throughout all files as the separator...
I tried:
awk '
NR==1 {
    h = $0
    k = 1000
    f = "file" k/1000
    print > f
    getline
    print "REGION chr", k-999, k > f
    next
}
$1 <= k {
    print > f
    next
}
{
    k = 1000*int(1+$1/1000)
    f = "file" k/1000
    print h > f
    print "REGION chr", k-999, k > f
    print > f
}' file
You can use this awk command:
awk 'function print_vals() {
    fn = "file" c;
    print hdr > fn;
    print "REGION chr", sn, en >> fn;
    for (i in a)
        print a[i] >> fn;
}
NR == 1 {
    hdr = $0;
    c = 0;
    next
}
NF == 2 && $1 >= 1000*c {
    if (c)
        print_vals();
    delete a;
    i = 0;
    c++;
    sn = $1;
}
NF == 2 {
    a[++i] = $0;
    en = $1;
}
END {
    print_vals();
}' file
Verification:
cat file1
NAMES n0 n1 n2 n3 n4 n5 n6 n7
REGION chr 404 992
404 AAAAAAGA
992 TTTTTTTA
cat file2
NAMES n0 n1 n2 n3 n4 n5 n6 n7
REGION chr 1146 1778
1146 CCCCGGCC
1727 CCCCCACC
1778 GCCCCCCC
This short Perl program will process a file specified as a parameter on the command line. It pushes onto @header any line that doesn't start with a number. Otherwise it divides the number by 1,000 and checks to see if there is already a file open for that millennium. If not then it opens a file for output and prints the header lines to it. Then the current line is printed to the selected file handle.
use strict;
use warnings;
use 5.010;
use autodie;

my (@header, @fh);

while ( <> ) {
    if ( /^(\d+)/ ) {
        my $n = int $1 / 1000;
        unless ( $fh[$n] ) {
            my $file = sprintf 'file%d.txt', $n+1;
            open $fh[$n], '>', $file;
            print { $fh[$n] } @header;
        }
        print { $fh[$n] } $_;
    }
    else {
        push @header, $_;
    }
}

close $_ for grep $_, @fh;
output - file1.txt
NAMES n0 n1 n2 n3 n4 n5 n6 n7
REGION chr 1 100000
404 AAAAAAGA
992 TTTTTTTA
output - file2.txt
NAMES n0 n1 n2 n3 n4 n5 n6 n7
REGION chr 1 100000
1146 CCCCGGCC
1727 CCCCCACC
1778 GCCCCCCC
You have an awk answer, but as this question is tagged perl I'll chip in a perl one too.
#!/usr/bin/env perl
use strict;
use warnings;

my %seen;
my $header = <> . <>;
print $header;

my $last_sequence_number = 0;
open( my $output, ">", "output.$last_sequence_number.out" ) or die $!;
print {$output} $header;
$seen{$last_sequence_number}++;

while (<>) {
    my ($key) = split;
    next unless $key =~ m/^\d+$/;
    my $sequence_number = int( $key / 1000 );
    if ( not $sequence_number == $last_sequence_number ) {
        print "Opening new file for $sequence_number\n";
        close($output);
        open( $output, ">", "output.$sequence_number.out" ) or die $!;
        print {$output} $header unless $seen{$sequence_number}++;
        $last_sequence_number = $sequence_number;
    }
    print {$output} $_;
}
What this does is:
read two lines from your input to figure out the headers.
run through the rest of the input, extracting the 'number bit'.
divides it by 1000 to figure out a 'file number' to write to.
opens a new file for that if it's relevant. (And if it's the first time it's done so, writes some headers).
prints the current line to the currently open file.
Invoke via either a pipe or myscript.pl <filename>
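For completeness (this is not from the original answers): the same per-thousand bucketing reads naturally in Python as well. A sketch using the question's sample data; the helper name and the edge-case choice that 1..1000 goes to file1 are mine:

```python
def split_by_thousand(lines):
    """Group data lines into per-1000-unit buckets; returns
    a dict mapping file names to their output lines."""
    header = lines[0]                 # NAMES line, repeated in every file
    buckets = {}
    for line in lines[2:]:            # skip the original REGION line
        pos = int(line.split()[0])
        n = (pos - 1) // 1000 + 1     # positions 1..1000 -> file1, etc.
        buckets.setdefault(n, []).append(line)
    out = {}
    for n, rows in sorted(buckets.items()):
        # REGION spans from the first to the last position in the bucket
        start, end = rows[0].split()[0], rows[-1].split()[0]
        out["file%d" % n] = [header, "REGION chr %s %s" % (start, end)] + rows
    return out

data = [
    "NAMES n0 n1 n2 n3 n4 n5 n6 n7",
    "REGION chr 1 100000",
    "404 AAAAAAGA",
    "992 TTTTTTTA",
    "1146 CCCCGGCC",
    "1727 CCCCCACC",
    "1778 GCCCCCCC",
]
files = split_by_thousand(data)
```

Writing each value of `files` to its key as a file name then reproduces the file1/file2 layout asked for.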

subprocess.call where is stdout on windows?

I am using subprocess.call to run an executable from a Python script. It works fine under Linux: the stdout from the executable is printed to the terminal as desired. When running the same subprocess.call on Windows (7), the command executes successfully but I get no stdout in the cmd window.
my subprocess.call looks like this:
call([self.test_dict["exe"],
      "-p",
      self.test_dict["pulse_input"],
      "-l",
      self.test_dict["library_input"],
      "-c",
      self.test_dict["config"],
      "-r",
      self.test_dict["pulse_output"],
      "-s",
      self.test_dict["stream_output"],
      "-u",
      self.test_dict["library_output"],
      track_output_flag,
      track_output_string,
      socket_output_flag,
      socket_output_string,
      "-f",
      self.test_dict["frame_size"],
      "-d",
      self.test_dict["ddi_enable"],
      self.test_dict["verbose"],
      ])
How can I get the executable's stdout displayed in the Windows cmd window, i.e. how can I get the same behaviour I observe under Linux?
I can now see some output using the Popen approach suggested below. My Popen call looks like this:
print(subprocess.Popen([self.test_dict["exe"],
                        "-p",
                        self.test_dict["pulse_input"],
                        "-l",
                        self.test_dict["library_input"],
                        "-c",
                        self.test_dict["config"],
                        "-r",
                        self.test_dict["pulse_output"],
                        "-s",
                        self.test_dict["stream_output"],
                        "-u",
                        self.test_dict["library_output"],
                        track_output_flag,
                        track_output_string,
                        socket_output_flag,
                        socket_output_string,
                        "-f",
                        self.test_dict["frame_size"],
                        "-d",
                        self.test_dict["ddi_enable"],
                        self.test_dict["verbose"]],
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE).communicate())
On linux:
(' Test Harness - Version 2.2.2\n \n\nFrame 1 of 5: 1000 of 4032
pdws Rx: 0.206, G2: 32.337 (kpps) lib: ddi_n 0 residue: 11 \nFrame 2
of 5: 2000 of 4032 pdws Rx: 0.174, G2: 35.396 (kpps) lib: ddi_n 0
residue: 18 \nFrame 3 of 5: 3000 of 4032 pdws Rx: 0.197, G2: 31.913
(kpps) lib: ddi_n 0 residue: 25 \nFrame 4 of 5: 4000 of 4032 pdws Rx:
0.139, G2: 33.908 (kpps) lib: ddi_n 0 residue: 13 \nFrame 5 of 5: 4032 of 4032 pdws Rx: 2.581, G2: 20.164 (kpps) lib: ddi_n 0 residue: 0
\nlibrary status: lib.n: 52 lib.ddi_n: 0 lib.orig_n: 52\nelapsed time:
0.300253 (s)\n\n', '')
On Windows:
('', '')
The first bit of the Linux output is what I would like to see printed to stdout on Windows.
I was mistaken that adding stdout=subprocess.PIPE was fixing the problem on Windows; it wasn't - apologies for the confusion.
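A pattern that usually surfaces the output on both platforms (a Python 3 sketch of mine; the dummy command below stands in for the real exe and its arguments) is to merge stderr into stdout and echo lines as they arrive. It also helps diagnose cases like the above: an exe that writes its progress to stderr, or that buffers differently when stdout is not a console, would explain the ('', '') seen on Windows:

```python
import subprocess
import sys

# dummy child process standing in for the real exe and its arguments;
# "-u" makes its output unbuffered so the line order is deterministic
cmd = [sys.executable, "-u", "-c",
       "import sys; print('frame 1'); sys.stderr.write('warn\\n')"]

proc = subprocess.Popen(cmd,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,   # fold stderr into stdout
                        universal_newlines=True)
lines = []
for line in proc.stdout:        # echo each line as the child produces it
    lines.append(line)
    print(line, end="")
proc.wait()
```

Reading line by line, instead of waiting for communicate(), shows progress output in the console as the child runs.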

Python: compare column in two files

I'm trying to solve this text-processing task using Python, but I'm not able to compare the columns.
What I have tried:
#!/usr/bin/env python
import sys

def Main():
    print "This is your input Files %s,%s" % ( file1,file2 )
    f1 = open(file1, 'r')
    f2 = open(file2, 'r')
    for line in f1:
        column1_f1 = line.split()[:1]
        #print column1_f1
        for check in f2:
            column2_f2 = check.split()[:1]
            print column1_f1,column2_f2
            if column1_f1 == column2_f2:
                print "Match",line
            else:
                print line,check
    f1.close()
    f2.close()

if __name__ == '__main__':
    if len(sys.argv) != 3:
        print >> sys.stderr, "This Script need exact 2 argument, aborting"
        exit(1)
    else:
        ThisScript, file1, file2 = sys.argv
        Main()
I'm new to Python. Please help me learn and understand this.
I would resolve it in Python 3 in a similar way to what user46911 did with awk: read the second file and save its keys in a dictionary, then check, for each line of the first file, whether its key exists:
import sys

codes = {}
with open(sys.argv[2], 'r') as f2:
    for line in f2:
        fields = line.split()
        codes[fields[0]] = fields[1]

with open(sys.argv[1], 'r') as f1:
    for line in f1:
        fields = line.split(None, 1)
        if fields[0] in codes:
            print('{0:4s}{1:s}'.format(codes[fields[0]], line[4:]), end='')
        else:
            print(line, end='')
Run it like:
python3 script.py file1 file2
That yields:
060090 AKRABERG FYR DN 6138 -666 101
EKVG 060100 VAGA FLOGHAVN DN 6205 -728 88
060110 TORSHAVN DN 6201 -675 55
060120 KIRKJA DN 6231 -631 55
060130 KLAKSVIK HELIPORT DN 6221 -656 75
060160 HORNS REV A DN 5550 786 21
060170 HORNS REV B DN 5558 761 10
060190 SILSTRUP DN 5691 863 0
060210 HANSTHOLM DN 5711 858 0
EKGF 060220 TYRA OEST DN 5571 480 43
EKTS 060240 THISTED LUFTHAVN DN 5706 870 8
060290 GROENLANDSHAVNEN DN 5703 1005 0
EKYT 060300 FLYVESTATION AALBORG DN 5708 985 13
060310 TYLSTRUP DN 5718 995 0
060320 STENHOEJ DN 5736 1033 56
060330 HIRTSHALS DN 5758 995 0
EKSN 060340 SINDAL FLYVEPLADS DN 5750 1021 28

How to merge only the unique lines from file_a to file_b?

This question has been asked here in one form or another but not quite the thing I'm looking for. So, this is the situation I shall be having: I already have one file, named file_a and I'm creating another file - file_b. file_a is always bigger than file_b in size. There will be a number of duplicate lines in file_b (hence, in file_a as well) but both the files will have some unique lines. What I want to do is: to copy/merge only the unique lines from file_a to file_b and then sort the line order, so that the file_b becomes the most up-to-date one with all the unique entries. Either of the original files shouldn't be more than 10MB in size. What's the most efficient (and fastest) way I can do that?
I was thinking something like that, which does the merging alright.
#!/usr/bin/env python
import os, time, sys

# Convert Date/time to epoch
def toEpoch(dt):
    dt_ptrn = '%d/%m/%y %H:%M:%S'
    return int(time.mktime(time.strptime(dt, dt_ptrn)))

# input files
o_file = "file_a"
c_file = "file_b"
n_file = [o_file,c_file]
m_file = "merged.file"

for x in range(len(n_file)):
    P = open(n_file[x],"r")
    output = P.readlines()
    P.close()

# Sort the output, order by 2nd last field
#sp_lines = [ line.split('\t') for line in output ]
#sp_lines.sort( lambda a, b: cmp(toEpoch(a[-2]),toEpoch(b[-2])) )

F = open(m_file,'w')
#for line in sp_lines:
for line in output:
    if "group_" in line:
        F.write(line)
F.close()
But, it's:
not with only the unique lines
not sorted (by next to last field)
and introduces the 3rd file i.e. m_file
Just a side note (long story short): I can't use sorted() here as I'm using v2.3, unfortunately. The input files look like this:
On 23/03/11 00:40:03
JobID Group.User Ctime Wtime Status QDate CDate
===================================================================================
430792 group_atlas.pltatl16 0 32 4 02/03/11 21:52:38 02/03/11 22:02:15
430793 group_atlas.atlas084 30 472 4 02/03/11 21:57:43 02/03/11 22:09:35
430794 group_atlas.atlas084 12 181 4 02/03/11 22:02:37 02/03/11 22:05:42
430796 group_atlas.atlas084 8 185 4 02/03/11 22:02:38 02/03/11 22:05:46
I tried to use cmp() to sort by the 2nd last field but, I think, it doesn't work just because of the first 3 lines of the input files.
Can anyone please help? Cheers!!!
Update 1:
For the future reference, as suggested by Jakob, here is the complete script. It worked just fine.
#!/usr/bin/env python
import os, time, sys
from sets import Set as set

def toEpoch(dt):
    dt_ptrn = '%d/%m/%y %H:%M:%S'
    return int(time.mktime(time.strptime(dt, dt_ptrn)))

def yield_lines(fileobj):
    #I want to discard the headers
    for i in xrange(3):
        fileobj.readline()
    #
    for line in fileobj:
        yield line

def app(path1, path2):
    file1 = set(yield_lines(open(path1)))
    file2 = set(yield_lines(open(path2)))
    return file1.union(file2)

# Input files
o_file = "testScript/03"
c_file = "03.bak"
m_file = "finished.file"

print time.strftime('%H:%M:%S', time.localtime())

# Sorting the output, order by 2nd last field
sp_lines = [ line.split('\t') for line in app(o_file, c_file) ]
sp_lines.sort( lambda a, b: cmp(toEpoch(a[-2]),toEpoch(b[-2])) )

F = open(m_file,'w')
print "No. of lines: ",len(sp_lines)
for line in sp_lines:
    MF = '\t'.join(line)
    F.write(MF)
F.close()
It took about 2m:47s to finish for 145244 lines.
[testac1@serv07 ~]$ ./uniq-merge.py
17:19:21
No. of lines: 145244
17:22:08
thanks!!
Update 2:
Hi eyquem, this is the Error message I get when I run your script(s).
From the first script:
[testac1@serv07 ~]$ ./uniq-merge_2.py
File "./uniq-merge_2.py", line 44
fm.writelines( '\n'.join(v)+'\n' for k,v in output )
^
SyntaxError: invalid syntax
From the second script:
[testac1@serv07 ~]$ ./uniq-merge_3.py
File "./uniq-merge_3.py", line 24
output = sett(line.rstrip() for line in fa)
^
SyntaxError: invalid syntax
Cheers!!
Update 3:
The previous one wasn't sorting the list at all. Thanks to eyquem for pointing that out. Well, it does now. This is a further modification to Jakob's version - I converted the set app(path1, path2) to a list myList and then applied sort( lambda ... ) to myList, to sort the merged file by the next to last field. This is the final script.
#!/usr/bin/env python
import os, time, sys
from sets import Set as set

def toEpoch(dt):
    # Convert date/time to epoch
    dt_ptrn = '%d/%m/%y %H:%M:%S'
    return int(time.mktime(time.strptime(dt, dt_ptrn)))

def yield_lines(fileobj):
    # Discard the headers (1st 3 lines)
    for i in xrange(3):
        fileobj.readline()
    for line in fileobj:
        yield line

def app(path1, path2):
    # Remove duplicate lines
    file1 = set(yield_lines(open(path1)))
    file2 = set(yield_lines(open(path2)))
    return file1.union(file2)

print time.strftime('%H:%M:%S', time.localtime())

# I/O files
o_file = "testScript/03"
c_file = "03.bak"
m_file = "finished.file"

# Convert the set into a list
myList = list(app(o_file, c_file))

# Sort the list by the date
sp_lines = [ line.split('\t') for line in myList ]
sp_lines.sort( lambda a, b: cmp(toEpoch(a[-2]),toEpoch(b[-2])) )

F = open(m_file,'w')
print "No. of lines: ",len(sp_lines)

# Finally write to the outFile
for line in sp_lines:
    MF = '\t'.join(line)
    F.write(MF)
F.close()
There is no speed boost at all; it took 2m:50s to process the same 145244 lines. If anyone sees any scope for improvement, please let me know. Thanks to Jakob and eyquem for their time. Cheers!!
Update 4:
Just for future reference, this is a modified version of eyquem's, which works much better and faster than the previous ones.
#!/usr/bin/env python
import os, sys, re
from sets import Set as sett
from time import mktime, strptime, strftime

def sorting_merge(o_file, c_file, m_file ):
    # RegEx for the Date/time field
    pat = re.compile('[0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d')

    def kl(lines,pat = pat):
        # match only the next to last field
        line = lines.split('\t')
        line = line[-2]
        return mktime(strptime((pat.search(line).group()),'%d/%m/%y %H:%M:%S'))

    output = sett()
    head = []

    # Separate the header & remove the duplicates
    def rmHead(f_n):
        f_n.readline()
        for line1 in f_n:
            if pat.search(line1): break
            else: head.append(line1) # line of the header
        for line in f_n:
            output.add(line.rstrip())
        output.add(line1.rstrip())
        f_n.close()

    fa = open(o_file, 'r')
    rmHead(fa)
    fb = open(c_file, 'r')
    rmHead(fb)

    # Sorting date-wise
    output = [ (kl(line),line.rstrip()) for line in output if line.rstrip() ]
    output.sort()

    fm = open(m_file,'w')
    # Write to the file & add the header
    fm.write(strftime('On %d/%m/%y %H:%M:%S\n')+(''.join(head[0]+head[1])))
    for t,line in output:
        fm.write(line + '\n')
    fm.close()

c_f = "03_a"
o_f = "03_b"
sorting_merge(o_f, c_f, 'outfile.txt')
This version is much faster - 6.99 sec. for 145244 lines, compared to 2m:47s for the previous one using lambda a, b: cmp(). Thanks to eyquem for all his support. Cheers!!
EDIT 2
My previous codes have problems with output = sett(line.rstrip() for line in fa) and output.sort(key=kl)
Moreover, they have some complications.
So I examined the choice of reading the files directly with a set() function, as Jakob Bowyer did in his code.
Congratulations Jakob! (and Michal Chruszcz, by the way): set() is unbeatable, it's faster than reading one line at a time.
Then, I abandoned my idea to read the files line after line.
.
But I kept my idea to avoid a sort with the help of the cmp() function because, as described in the doc:
s.sort([cmpfunc=None])
The sort() method takes an optional
argument specifying a comparison
function of two arguments (list items)
(...) Note that this slows the sorting
process down considerably
http://docs.python.org/release/2.3/lib/typesseq-mutable.html
Then, I managed to obtain a list of tuples (t,line) in which the t is
time.mktime(time.strptime(( 1st date-and-hour in line ,'%d/%m/%y %H:%M:%S'))
by the instruction
output = [ (kl(line),line.rstrip()) for line in output]
.
I tested 2 codes. The following one in which 1st date-and-hour in line is computed thanks to a regex:
def kl(line,pat = pat):
    return time.mktime(time.strptime((pat.search(line).group()),'%d/%m/%y %H:%M:%S'))

output = [ (kl(line),line.rstrip()) for line in output if line.rstrip()]
output.sort()
And a second code in which kl() is:
def kl(line,pat = pat):
    return time.mktime(time.strptime(line.split('\t')[-2],'%d/%m/%y %H:%M:%S'))
.
The results are
Times of execution:
0.03598 seconds for the first code with regex
0.03580 seconds for the second code with split('\t')
that is to say the same
This algorithm is faster than code using a cmp() function: code in which the set of lines output isn't transformed into a list of tuples by
output = [ (kl(line),line.rstrip()) for line in output]
but is only transformed into a list of the lines (without duplicates, then) and sorted with a function mycmp() (see the doc):
def mycmp(a,b):
    return cmp(time.mktime(time.strptime(a.split('\t')[-2],'%d/%m/%y %H:%M:%S')),
               time.mktime(time.strptime(b.split('\t')[-2],'%d/%m/%y %H:%M:%S')))

output = [ line.rstrip() for line in output] # not list(output), to avoid the problem of the newline of the last line of each file
output.sort(mycmp)
for line in output:
    fm.write(line+'\n')
has an execution time of
0.11574 seconds
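(A side illustration of mine, not from the original answer.) The gap comes from decorate-sort-undecorate, and its modern equivalent sort(key=...), computing the timestamp once per line, while cmp() recomputes it on every comparison. A tiny self-contained sketch with made-up lines:

```python
from time import mktime, strptime

lines = [
    "430793\tgroup_atlas.atlas084\t02/03/11 21:57:43\t02/03/11 22:09:35",
    "430792\tgroup_atlas.pltatl16\t02/03/11 21:52:38\t02/03/11 22:02:15",
]

def kl(line):
    # epoch of the next-to-last (tab-separated) field
    return mktime(strptime(line.split("\t")[-2], "%d/%m/%y %H:%M:%S"))

# decorate-sort-undecorate: the key is computed once per line
decorated = sorted((kl(line), line) for line in lines)
result = [line for t, line in decorated]

# same order, more simply, with key= (also one key call per line)
assert result == sorted(lines, key=kl)
```

With cmp(), sorting n lines costs O(n log n) strptime calls instead of n, which is exactly the slowdown the doc quote warns about.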
.
The code:
#!/usr/bin/env python
import os, time, sys, re
from sets import Set as sett

def sorting_merge(o_file , c_file, m_file ):
    pat = re.compile('[0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d'
                     '(?=[ \t]+[0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d)')

    def kl(line,pat = pat):
        return time.mktime(time.strptime((pat.search(line).group()),'%d/%m/%y %H:%M:%S'))

    output = sett()
    head = []

    fa = open(o_file)
    fa.readline() # first line is skipped
    while True:
        line1 = fa.readline()
        mat1 = pat.search(line1)
        if not mat1: head.append(line1) # line1 is here a line of the header
        else: break # the loop ends on the first line1 not being a line of the heading
    output = sett( fa )
    fa.close()

    fb = open(c_file)
    while True:
        line1 = fb.readline()
        if pat.search(line1): break
    output = output.union(sett( fb ))
    fb.close()

    output = [ (kl(line),line.rstrip()) for line in output]
    output.sort()

    fm = open(m_file,'w')
    fm.write(time.strftime('On %d/%m/%y %H:%M:%S\n')+(''.join(head)))
    for t,line in output:
        fm.write(line + '\n')
    fm.close()

te = time.clock()
sorting_merge('ytre.txt','tataye.txt','merged.file.txt')
print time.clock()-te
This time, I hope it will run correctly, and the only thing left to do is to wait for the execution times on real files much bigger than the ones on which I tested the codes.
.
EDIT 3
pat = re.compile('[0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d'
                 '(?=[ \t]+'
                 '[0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d'
                 '|'
                 '[ \t]+aborted/deleted)')
.
EDIT 4
#!/usr/bin/env python
import os, time, sys, re
from sets import Set

def sorting_merge(o_file , c_file, m_file ):
    pat = re.compile('[0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d'
                     '(?=[ \t]+'
                     '[0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d'
                     '|'
                     '[ \t]+aborted/deleted)')

    def kl(line,pat = pat):
        return time.mktime(time.strptime((pat.search(line).group()),'%d/%m/%y %H:%M:%S'))

    head = []
    output = Set()

    fa = open(o_file)
    fa.readline() # first line is skipped
    for line1 in fa:
        if pat.search(line1): break # first line after the heading
        else: head.append(line1) # line of the header
    for line in fa:
        output.add(line.rstrip())
    output.add(line1.rstrip())
    fa.close()

    fb = open(c_file)
    for line1 in fb:
        if pat.search(line1): break
    for line in fb:
        output.add(line.rstrip())
    output.add(line1.rstrip())
    fb.close()

    if '' in output: output.remove('')

    output = [ (kl(line),line) for line in output]
    output.sort()

    fm = open(m_file,'w')
    fm.write(time.strftime('On %d/%m/%y %H:%M:%S\n')+(''.join(head)))
    for t,line in output:
        fm.write(line+'\n')
    fm.close()

te = time.clock()
sorting_merge('A.txt','B.txt','C.txt')
print time.clock()-te
Maybe something along these lines?
from sets import Set as set

def yield_lines(fileobj):
    #I want to discard the headers
    for i in xrange(3):
        fileobj.readline()
    for line in fileobj:
        yield line

def app(path1, path2):
    file1 = set(yield_lines(open(path1)))
    file2 = set(yield_lines(open(path2)))
    return file1.union(file2)
EDIT: Forgot about with :$
I wrote this new code, with the ease of using a set. It is faster than my previous code. And, it seems, than your code:
#!/usr/bin/env python
import os, time, sys, re
from sets import Set as sett

def sorting_merge(o_file , c_file, m_file ):
    # Convert Date/time to epoch
    def toEpoch(dt):
        dt_ptrn = '%d/%m/%y %H:%M:%S'
        return int(time.mktime(time.strptime(dt, dt_ptrn)))

    pat = re.compile('([0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d)'
                     '[ \t]+[0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d')

    fa = open(o_file)
    head = []
    fa.readline()
    while True:
        line1 = fa.readline()
        mat1 = pat.search(line1)
        if not mat1:
            head.append(('',line1.rstrip()))
        else:
            break
    output = sett((toEpoch(pat.search(line).group(1)) , line.rstrip())
                  for line in fa)
    output.add((toEpoch(mat1.group(1)) , line1.rstrip()))
    fa.close()

    fb = open(c_file)
    while True:
        line1 = fb.readline()
        mat1 = pat.search(line1)
        if mat1: break
    for line in fb:
        output.add((toEpoch(pat.search(line).group(1)) , line.rstrip()))
    output.add((toEpoch(mat1.group(1)) , line1.rstrip()))
    fb.close()

    output = list(output)
    output.sort()
    output[0:0] = head
    output[0:0] = [('',time.strftime('On %d/%m/%y %H:%M:%S'))]

    fm = open(m_file,'w')
    fm.writelines( line+'\n' for t,line in output)
    fm.close()

te = time.clock()
sorting_merge('ytr.txt','tatay.txt','merged.file.txt')
print time.clock()-te
Note that this code puts a heading in the merged file.
.
EDIT
Aaaaaah... I got it... :-))
Execution time divided by 3!
#!/usr/bin/env python
import os, time, sys, re
from sets import Set as sett

def sorting_merge(o_file , c_file, m_file ):
    pat = re.compile('[0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d'
                     '(?=[ \t]+[0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d)')

    def kl(line,pat = pat):
        return time.mktime(time.strptime((pat.search(line).group()),'%d/%m/%y %H:%M:%S'))

    fa = open(o_file)
    head = []
    fa.readline()
    while True:
        line1 = fa.readline()
        mat1 = pat.search(line1)
        if not mat1:
            head.append(line1.rstrip())
        else:
            break
    output = sett(line.rstrip() for line in fa)
    output.add(line1.rstrip())
    fa.close()

    fb = open(c_file)
    while True:
        line1 = fb.readline()
        mat1 = pat.search(line1)
        if mat1: break
    for line in fb:
        output.add(line.rstrip())
    output.add(line1.rstrip())
    fb.close()

    output = list(output)
    output.sort(key=kl)
    output[0:0] = [time.strftime('On %d/%m/%y %H:%M:%S')] + head

    fm = open(m_file,'w')
    fm.writelines( line+'\n' for line in output)
    fm.close()

te = time.clock()
sorting_merge('ytre.txt','tataye.txt','merged.file.txt')
print time.clock()-te
Last codes, I hope.
Because I found a killer code.
First, I created two files "xxA.txt" and "yyB.txt" of 30000 lines each, with lines such as
430559 group_atlas.atlas084 12 181 4 04/03/10 01:38:02 02/03/11 22:05:42
430502 group_atlas.atlas084 12 181 4 23/01/10 21:45:05 02/03/11 22:05:42
430544 group_atlas.atlas084 12 181 4 17/06/11 12:58:10 02/03/11 22:05:42
430566 group_atlas.atlas084 12 181 4 25/03/10 23:55:22 02/03/11 22:05:42
with the following code:
create AB.py
from random import choice

n = tuple( str(x) for x in xrange(500,600))
days = ('01','02','03','04','05','06','07','08','09','10','11','12','13','14','15','16',
        '17','18','19','20','21','22','23','24','25','26','27','28')
# not '29','30','31' to avoid problems with strptime() on last days of february
months = days[0:12]
hours = days[0:23]
ms = ['00','01','02','03','04','05','06','07','09'] + [str(x) for x in xrange(10,60)]

repeat = 30000

with open('xxA.txt','w') as f:
    # 430794 group_atlas.atlas084 12 181 4 02/03/11 22:02:37 02/03/11 22:05:42
    ch = ('On 23/03/11 00:40:03\n'
          'JobID Group.User Ctime Wtime Status QDate CDate\n'
          '===================================================================================\n')
    f.write(ch)
    for i in xrange(repeat):
        line = '430%s group_atlas.atlas084 12 181 4 \t%s/%s/%s %s:%s:%s\t02/03/11 22:05:42\n' %\
               (choice(n),
                choice(days),choice(months),choice(('10','11')),
                choice(hours),choice(ms),choice(ms))
        f.write(line)

with open('yyB.txt','w') as f:
    # 430794 group_atlas.atlas084 12 181 4 02/03/11 22:02:37 02/03/11 22:05:42
    ch = ('On 25/03/11 13:45:24\n'
          'JobID Group.User Ctime Wtime Status QDate CDate\n'
          '===================================================================================\n')
    f.write(ch)
    for i in xrange(repeat):
        line = '430%s group_atlas.atlas084 12 181 4 \t%s/%s/%s %s:%s:%s\t02/03/11 22:05:42\n' %\
               (choice(n),
                choice(days),choice(months),choice(('10','11')),
                choice(hours),choice(ms),choice(ms))
        f.write(line)

with open('xxA.txt') as g:
    print 'readlines of xxA.txt :',len(g.readlines())
    g.seek(0,0)
    print 'set of xxA.txt :',len(set(g))
with open('yyB.txt') as g:
    print 'readlines of yyB.txt :',len(g.readlines())
    g.seek(0,0)
    print 'set of yyB.txt :',len(set(g))
Then I ran these 3 programs:
"merging regex.py"
#!/usr/bin/env python
from time import clock, mktime, strptime, strftime
from sets import Set
import re

infunc = []

def sorting_merge(o_file, c_file, m_file):
    infunc.append(clock())  # infunc[0]
    pat = re.compile(r'([0123]\d/[01]\d/\d{2} [012]\d:[0-6]\d:[0-6]\d)')
    output = Set()

    def rmHead(filename, a_set):
        f_n = open(filename, 'r')
        f_n.readline()
        head = []
        for line in f_n:
            head.append(line)  # line of the header
            if line.strip('= \r\n') == '':
                break
        for line in f_n:
            a_set.add(line.rstrip())
        f_n.close()
        return head

    infunc.append(clock())  # infunc[1]
    head = rmHead(o_file, output)
    infunc.append(clock())  # infunc[2]
    head = rmHead(c_file, output)
    infunc.append(clock())  # infunc[3]
    if '' in output:
        output.remove('')
    infunc.append(clock())  # infunc[4]
    output = [(mktime(strptime(pat.search(line).group(), '%d/%m/%y %H:%M:%S')), line)
              for line in output]
    infunc.append(clock())  # infunc[5]
    output.sort()
    infunc.append(clock())  # infunc[6]
    fm = open(m_file, 'w')
    fm.write(strftime('On %d/%m/%y %H:%M:%S\n') + ''.join(head))
    for t, line in output:
        fm.write(line + '\n')
    fm.close()
    infunc.append(clock())  # infunc[7]

c_f = "xxA.txt"
o_f = "yyB.txt"
t1 = clock()
sorting_merge(o_f, c_f, 'zz_mergedr.txt')
t2 = clock()
print 'merging regex'
print 'total time of execution :', t2 - t1
print '              launching :', infunc[1] - t1
print '            preparation :', infunc[1] - infunc[0]
print '    reading of 1st file :', infunc[2] - infunc[1]
print '    reading of 2nd file :', infunc[3] - infunc[2]
print '     output.remove(\'\') :', infunc[4] - infunc[3]
print 'creation of list output :', infunc[5] - infunc[4]
print '      sorting of output :', infunc[6] - infunc[5]
print 'writing of merging file :', infunc[7] - infunc[6]
print 'closing of the function :', t2 - infunc[7]
"merging split.py"
#!/usr/bin/env python
from time import clock, mktime, strptime, strftime
from sets import Set

infunc = []

def sorting_merge(o_file, c_file, m_file):
    infunc.append(clock())  # infunc[0]
    output = Set()

    def rmHead(filename, a_set):
        f_n = open(filename, 'r')
        f_n.readline()
        head = []
        for line in f_n:
            head.append(line)  # line of the header
            if line.strip('= \r\n') == '':
                break
        for line in f_n:
            a_set.add(line.rstrip())
        f_n.close()
        return head

    infunc.append(clock())  # infunc[1]
    head = rmHead(o_file, output)
    infunc.append(clock())  # infunc[2]
    head = rmHead(c_file, output)
    infunc.append(clock())  # infunc[3]
    if '' in output:
        output.remove('')
    infunc.append(clock())  # infunc[4]
    output = [(mktime(strptime(line.split('\t')[-2], '%d/%m/%y %H:%M:%S')), line)
              for line in output]
    infunc.append(clock())  # infunc[5]
    output.sort()
    infunc.append(clock())  # infunc[6]
    fm = open(m_file, 'w')
    fm.write(strftime('On %d/%m/%y %H:%M:%S\n') + ''.join(head))
    for t, line in output:
        fm.write(line + '\n')
    fm.close()
    infunc.append(clock())  # infunc[7]

c_f = "xxA.txt"
o_f = "yyB.txt"
t1 = clock()
sorting_merge(o_f, c_f, 'zz_mergeds.txt')
t2 = clock()
print 'merging split'
print 'total time of execution :', t2 - t1
print '              launching :', infunc[1] - t1
print '            preparation :', infunc[1] - infunc[0]
print '    reading of 1st file :', infunc[2] - infunc[1]
print '    reading of 2nd file :', infunc[3] - infunc[2]
print '     output.remove(\'\') :', infunc[4] - infunc[3]
print 'creation of list output :', infunc[5] - infunc[4]
print '      sorting of output :', infunc[6] - infunc[5]
print 'writing of merging file :', infunc[7] - infunc[6]
print 'closing of the function :', t2 - infunc[7]
"merging killer"
#!/usr/bin/env python
from time import clock, strftime
from sets import Set
import re

infunc = []

def sorting_merge(o_file, c_file, m_file):
    infunc.append(clock())  # infunc[0]
    patk = re.compile(r'([0123]\d)/([01]\d)/(\d{2}) ([012]\d:[0-6]\d:[0-6]\d)')
    output = Set()

    def rmHead(filename, a_set):
        f_n = open(filename, 'r')
        f_n.readline()
        head = []
        for line in f_n:
            head.append(line)  # line of the header
            if line.strip('= \r\n') == '':
                break
        for line in f_n:
            a_set.add(line.rstrip())
        f_n.close()
        return head

    infunc.append(clock())  # infunc[1]
    head = rmHead(o_file, output)
    infunc.append(clock())  # infunc[2]
    head = rmHead(c_file, output)
    infunc.append(clock())  # infunc[3]
    if '' in output:
        output.remove('')
    infunc.append(clock())  # infunc[4]
    output = [(patk.search(line).group(3, 2, 1, 4), line) for line in output]
    infunc.append(clock())  # infunc[5]
    output.sort()
    infunc.append(clock())  # infunc[6]
    fm = open(m_file, 'w')
    fm.write(strftime('On %d/%m/%y %H:%M:%S\n') + ''.join(head))
    for t, line in output:
        fm.write(line + '\n')
    fm.close()
    infunc.append(clock())  # infunc[7]

c_f = "xxA.txt"
o_f = "yyB.txt"
t1 = clock()
sorting_merge(o_f, c_f, 'zz_mergedk.txt')
t2 = clock()
print 'merging killer'
print 'total time of execution :', t2 - t1
print '              launching :', infunc[1] - t1
print '            preparation :', infunc[1] - infunc[0]
print '    reading of 1st file :', infunc[2] - infunc[1]
print '    reading of 2nd file :', infunc[3] - infunc[2]
print '     output.remove(\'\') :', infunc[4] - infunc[3]
print 'creation of list output :', infunc[5] - infunc[4]
print '      sorting of output :', infunc[6] - infunc[5]
print 'writing of merging file :', infunc[7] - infunc[6]
print 'closing of the function :', t2 - infunc[7]
results
merging regex
total time of execution : 14.2816595405
launching : 0.00169211450059
preparation : 0.00168093989599
reading of 1st file : 0.163582242995
reading of 2nd file : 0.141301478261
output.remove('') : 2.37460347614e-05
creation of output : 13.4460212122
sorting of output : 0.216363532237
writing of merging file : 0.232923737514
closing of the function : 0.0797514767938
merging split
total time of execution : 13.7824474898
launching : 4.10666718815e-05
preparation : 2.70984161395e-05
reading of 1st file : 0.154349784679
reading of 2nd file : 0.136050810927
output.remove('') : 2.06730184981e-05
creation of output : 12.9691854691
sorting of output : 0.218704332534
writing of merging file : 0.225259076223
closing of the function : 0.0788362766776
merging killer
total time of execution : 2.14315311024
launching : 0.00206199391263
preparation : 0.00205026057781
reading of 1st file : 0.158711791582
reading of 2nd file : 0.138976601775
output.remove('') : 2.37460347614e-05
creation of output : 0.621466415424
sorting of output : 0.823161602941
writing of merging file : 0.227701565422
closing of the function : 0.171049393149
With the killer program, sorting the output takes about 4 times longer, but building the output list is about 21 times faster.
Overall, the execution time is reduced by roughly 85 %.
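The speed-up comes entirely from the sort key: instead of running strptime() and mktime() on every line, the killer version reorders the captured substrings into (year, month, day, time), and since every field is zero-padded, lexicographic order on those strings equals chronological order. A minimal Python 3 sketch of the idea (the three sample lines are made up to match the format above):

```python
import re
import time

# Made-up lines in the same format as the generated files above.
lines = [
    '430559 group_atlas.atlas084 12 181 4 \t04/03/10 01:38:02\t02/03/11 22:05:42',
    '430502 group_atlas.atlas084 12 181 4 \t23/01/10 21:45:05\t02/03/11 22:05:42',
    '430544 group_atlas.atlas084 12 181 4 \t17/06/11 12:58:10\t02/03/11 22:05:42',
]

patk = re.compile(r'([0123]\d)/([01]\d)/(\d{2}) ([012]\d:[0-6]\d:[0-6]\d)')

def key_slow(line):
    # Parse dd/mm/yy hh:mm:ss into a timestamp: one strptime + mktime per line.
    return time.mktime(time.strptime(patk.search(line).group(),
                                     '%d/%m/%y %H:%M:%S'))

def key_fast(line):
    # Reorder the captured strings to (yy, mm, dd, hh:mm:ss); zero-padded
    # strings compare in the same order as the numbers they encode.
    return patk.search(line).group(3, 2, 1, 4)

# Both keys produce the same chronological ordering.
assert sorted(lines, key=key_slow) == sorted(lines, key=key_fast)
```

Comparing string tuples is a bit slower per comparison than comparing floats, which is why the sort step itself gets slower, but it avoids one expensive strptime()/mktime() call per line, which dominates the total time.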
