I have the following code. The file file.txt contains a list of variables. Some of them should be str type and others should be int type.
var = [None] * 3
j = 0
with open("file.txt", "r") as f:
    content = f.readline().split(";")
for i in range(2, 5):
    var[j] = int(content[i])
    j += 1
Instead of incrementing j manually, I'd like to do it in a cleaner way (e.g. within the 'instructions' of the for loop, or something like that).
What would be a shorter/better way to handle this task?
You can use enumerate:
for j, i in enumerate(range(2, 5)):
    var[j] = int(content[i])
Also, you don't need to initialize var at all - just use a list comprehension:
var = [int(content[i]) for i in range(2, 5)]
Another approach (may be less Pythonic/less efficient/less readable):
You can zip two ranges together:
for j, i in zip(range(len(range(2, 5))), range(2, 5)):
    var[j] = int(content[i])
You know that the second range is range(2, 5) and want the first range to be from zero to len(range(2, 5)) - that's range(len(range(2, 5))).
The idiomatic way to count the current iteration index is by using enumerate:
for j, i in enumerate(range(2, 5)):
    var[j] = int(content[i])
(There's no need to initialize j = 0 in this case.)
However, your example code would usually just be written as:
with open("file.txt", "r") as f:
content = f.readline().split(";")
var = [int(x) for x in content[2:5]]
which uses language features such as
a slice ([2:5]) to select a part of a list
a list comprehension to create a new list from an input sequence
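For example, on a made-up line the two features behave like this:
content = "a;b;10;20;30;x".split(";")      # stand-in for a line read from file.txt
print(content[2:5])                         # ['10', '20', '30'] -- the slice picks items 2, 3 and 4
print([int(x) for x in content[2:5]])       # [10, 20, 30] -- the comprehension converts each item to int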
I'm new to programming and Python and I'm looking for a way to distinguish between two input formats in the same input text file. For example, let's say I have an input file like so, where values are comma-separated:
5
Washington,A,10
New York,B,20
Seattle,C,30
Boston,B,20
Atlanta,D,50
2
New York,5
Boston,10
Where the format is N followed by N lines of Data1, and M followed by M lines of Data2. I tried opening the file, reading it line by line and storing it into one single list, but I'm not sure how to go about to produce 2 lists for Data1 and Data2, such that I would get:
Data1 = ["Washington,A,10", "New York,B,20", "Seattle,C,30", "Boston,B,20", "Atlanta,D,50"]
Data2 = ["New York,5", "Boston,10"]
My initial idea was to iterate through the list until I found an integer i, remove the integer from the list and continue for the next i iterations all while storing the subsequent values in a separate list, until I found the next integer and then repeat. However, this would destroy my initial list. Is there a better way to separate the two data formats in different lists?
You could use itertools.islice and a list comprehension:
from itertools import islice
string = """
5
Washington,A,10
New York,B,20
Seattle,C,30
Boston,B,20
Atlanta,D,50
2
New York,5
Boston,10
"""
result = [[x for x in islice(parts, idx + 1, idx + 1 + int(line))]
          for parts in [string.split("\n")]
          for idx, line in enumerate(parts)
          if line.isdigit()]
print(result)
This yields
[['Washington,A,10', 'New York,B,20', 'Seattle,C,30', 'Boston,B,20', 'Atlanta,D,50'], ['New York,5', 'Boston,10']]
For a file, you need to change it to:
with open("testfile.txt", "r") as f:
result = [[x for x in islice(parts, idx + 1, idx + 1 + int(line))]
for parts in [f.read().split("\n")]
for idx, line in enumerate(parts)
if line.isdigit()]
print(result)
You're definitely on the right track.
If you want to preserve the original list here, you don't actually have to remove integer i; you can just go on to the next item.
Code:
originalData = []
formattedData = []

with open("data.txt", "r") as f:
    f = list(f)
    originalData = f
    i = 0
    while i < len(f):  # Iterate through every line
        try:
            n = int(f[i])  # See if line can be cast to an integer
            originalData[i] = n  # Change string to int in original
            formattedData.append([])
            for j in range(n):
                i += 1
                item = f[i].replace('\n', '')
                originalData[i] = item  # Remove newline char in original
                formattedData[-1].append(item)
        except ValueError:
            print("File has incorrect format")
        i += 1

print(originalData)
print(formattedData)
The following code will produce a list results which is equal to [Data1, Data2].
The code assumes that the number of entries specified matches the number of lines that actually follow. That means it will not work for a file like this:
2
New York,5
Boston,10
Seattle,30
The code:
# get the data from the text file
with open('filename.txt', 'r') as file:
    lines = file.read().splitlines()

results = []
index = 0
while index < len(lines):
    # Find the start and end values.
    start = index + 1
    end = start + int(lines[index])
    # Everything from the start up to and excluding the end index gets added
    results.append(lines[start:end])
    # Update the index
    index = end
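If you want to guard against that assumption instead of relying on it, a minimal sketch (the extra checks are my addition, not part of the answer above) could look like this:
with open('filename.txt', 'r') as file:
    lines = file.read().splitlines()

results = []
index = 0
while index < len(lines):
    count = int(lines[index])        # raises ValueError if a data line appears where a count is expected
    start, end = index + 1, index + 1 + count
    block = lines[start:end]
    if len(block) != count:          # the file ended before the promised number of lines
        raise ValueError("expected {} lines, got {}".format(count, len(block)))
    results.append(block)
    index = end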
This document has a word and tens of thousands of floats per line, and I want to transform it into a dictionary with the word as the key and a vector of all the floats as the value.
This is how I am doing it, but due to the size of the file (about 20k lines, each with about 10k values) the process is taking a bit too long. I could not find a more efficient way of doing the parsing, just some alternative ways that were not guaranteed to decrease the run time.
with open("googlenews.word2vec.300d.txt") as g_file:
i = 0;
#dict of words: [lots of floats]
google_words = {}
for line in g_file:
google_words[line.split()[0]] = [float(line.split()[i]) for i in range(1, len(line.split()))]
In your solution you call the slow line.split() repeatedly, once for every value on the line. Consider the following modification:
with open("googlenews.word2vec.300d.txt") as g_file:
i = 0;
#dict of words: [lots of floats]
google_words = {}
for line in g_file:
word, *numbers = line.split()
google_words[word] = [float(number) for number in numbers]
One advanced concept I used here is "unpacking":
word, *numbers = line.split()
Python allows you to unpack iterable values into multiple variables:
a, b, c = [1, 2, 3]
# This is practically equivalent to
a = 1
b = 2
c = 3
The * is a shortcut for "take the leftovers, put them in a list, and assign that list to the name":
a, *rest = [1, 2, 3, 4]
# results in
a == 1
rest == [2, 3, 4]
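Applied to a line from the word2vec file (with made-up values), the unpacking looks like this:
line = "dog 0.12 3.4 5.6"             # hypothetical line: a word followed by floats
word, *numbers = line.split()          # word == "dog", numbers == ["0.12", "3.4", "5.6"]
vector = [float(n) for n in numbers]
print(word, vector)                    # dog [0.12, 3.4, 5.6]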
Just don't call line.split() more than once.
with open("googlenews.word2vec.300d.txt") as g_file:
i = 0;
#dict of words: [lots of floats]
google_words = {}
for line in g_file:
temp = line.split()
google_words[temp[0]] = [float(temp[i]) for i in range(1, len(temp))]
Here's a simple generator of such a file:
s = "x"
for i in range(10000):
    s += " 1.2345"
print(s)
The former version takes some time.
The version with only one split call is instant.
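To see the difference yourself, here is a rough timeit sketch on a single synthetic line (the line contents and sizes are made up):
import timeit

line = "x" + " 1.2345" * 10000   # one synthetic line, like the generator above produces

def repeated_split():
    # mirrors the original code: split() is evaluated again for every index
    return [float(line.split()[i]) for i in range(1, len(line.split()))]

def single_split():
    # split once, then index into the result
    temp = line.split()
    return [float(temp[i]) for i in range(1, len(temp))]

print(timeit.timeit(repeated_split, number=1))   # noticeably slow
print(timeit.timeit(single_split, number=1))     # effectively instant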
You could also use the csv module, which should be more efficient than what you are doing.
It would be something like:
import csv

d = {}
with open("huge_file_so_huge.txt", "r") as g_file:
    for row in csv.reader(g_file, delimiter=" "):
        d[row[0]] = list(map(float, row[1:]))
For starters I've programmed in C++ for the past year and a half, and this is the first time I'm using Python.
The objects have two int attributes, say i_ and j_.
The text file is as follows:
1,0
2,0
3,1
4,0
...
What I want to do is have the list filled with objects with correct attributes. For example,
print(myList[2].i_, myList[2].j_, end = ' ')
would return
3 1
Here's my attempt after reading a little online.
class myClass:
    def __init__(self, i, j):
        self.i_ = i
        self.j_ = j

with open("myFile.txt") as f:
    myList = [list(map(int, line.strip().split(','))) for line in f]
    for line in f:
        i = 0
        while (i < 28):
            myList.append(myClass(line.split(","), line.split(",")))
            i += 1
But it doesn't work obviously.
Thanks in advance!
Since you're working with a CSV file you might want to use the csv module. First you would pass the file object to the csv.reader function and it will return an iterable of rows from the file. From there you can cast it to a list and slice it to the 29 rows you are required to have. Finally, you can iterate over the rows (e.g. [1,0]) and simply unpack them in the class constructor.
class MyClass:
    def __init__(self, i, j):
        self.i = int(i)
        self.j = int(j)

    def __repr__(self):
        return f"MyClass(i={self.i}, j={self.j})"

with open('test.txt') as f:
    rows = [r.strip().split(',') for r in f.readlines()[:29]]

my_list = [MyClass(*row) for row in rows]

for obj in my_list:
    print(obj.i, obj.j)

print(len(my_list))
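The prose above mentions csv.reader, while the code splits each line manually; a sketch of the csv-based variant it describes (same hypothetical test.txt, MyClass as defined above) would be:
import csv

with open('test.txt') as f:
    rows = [row for row in csv.reader(f) if row][:29]   # skip blank lines, keep at most 29 rows

my_list = [MyClass(*row) for row in rows]
for obj in my_list:
    print(obj.i, obj.j)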
I'm not sure you really want to stick with this format:
print(myList[2].i_, myList[2].j_, end = ' ')
My solution is quite manually coded, and I am using a dictionary to store i and j:
result = {'i': [],
          'j': []}
Below is my code:
result = {'i': [],
          'j': []}

with open('a.txt', 'r') as myfile:
    data = myfile.read().replace('\n', ',')

print(data)
a = data.split(",")
print(a)
b = [x for x in a if x]
print(b)

for i in range(0, len(b)):
    if i % 2 == 0:
        result['i'].append(b[i])
    else:
        result['j'].append(b[i])

print(result['i'])
print(result['j'])
print(str(result['i'][2]) + "," + str(result['j'][2]))
The result: 3,1
I'm not sure what you're trying to do with myList = [list(map(int, line.strip().split(','))) for line in f]. This will give you a list of lists with those pairs converted to ints. But you really want objects from those numbers. So let's do that directly as we iterate through the lines in the file and do away with the next while loop:
my_list = []
with open("myFile.txt") as f:
    for line in f:
        nums = [int(i) for i in line.strip().split(',') if i]
        if len(nums) >= 2:
            my_list.append(myClass(nums[0], nums[1]))
I have the following code for producing a big text file:
import random

d = 3
n = 100000
f = open("input.txt", 'a')

s = ""
for j in range(0, d-1):
    s += str(round(random.uniform(0, 1000), 3)) + " "
s += str(round(random.uniform(0, 1000), 3))
f.write(s)

for i in range(0, n-1):
    s = ""
    for j in range(0, d-1):
        s += str(round(random.uniform(0, 1000), 3)) + " "
    s += str(round(random.uniform(0, 1000), 3))
    f.write("\n" + s)

f.close()
But it seems to be pretty slow even to generate 5 GB of this.
How can I make it better? I want the output to look like:
796.802 691.462 803.664
849.483 201.948 452.155
144.174 526.745 826.565
986.685 238.462 49.885
137.617 416.243 515.474
366.199 687.629 423.929
Well, of course, the whole thing is I/O bound. You can't output the file faster than the storage device can write it. Leaving that aside, there are some optimizations that could be made.
Your method of building up a long string from several shorter strings is suboptimal. You're saying, essentially, s = s1 + s2. When you tell Python to do this, it concatenates two string objects to make a new string object. This is slow, especially when repeated.
A much better way is to collect the individual string objects in a list or other iterable, then use the join method to run them together. For example:
>>> ''.join(['a', 'b', 'c'])
'abc'
>>> ', '.join(['a', 'b', 'c'])
'a, b, c'
Instead of n-1 string concatenations to join n strings, this does the whole thing in one step.
There's also a lot of repeated code that could be combined. Here's a cleaner design, still using the loops.
import random

d = 3
n = 1000

f = open('input.txt', 'w')
for i in range(n):
    nums = []
    for j in range(d):
        nums.append(str(round(random.uniform(0, 1000), 3)))
    s = ' '.join(nums)
    f.write(s)
    f.write('\n')
f.close()
A cleaner, briefer, more Pythonic way is to use a list comprehension:
import random

d = 3
n = 1000

f = open('input.txt', 'w')
for i in range(n):
    nums = [str(round(random.uniform(0, 1000), 3)) for j in range(d)]
    f.write(' '.join(nums))
    f.write('\n')
f.close()
Note that in both cases, I wrote the newline separately. That should be faster than concatenating it to the string, since I/O is buffered anyway. If I were joining a list of strings without separators, I'd just tack on a newline as the last string before joining.
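In other words, a tiny sketch of that variant:
# Hypothetical: no separator, so the newline is just tacked on as the last piece.
pieces = ['796.802 ', '691.462 ', '803.664']
line = ''.join(pieces + ['\n'])       # '796.802 691.462 803.664\n'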
As Daniel's answer says, numpy is probably faster, but maybe you don't want to get into numpy yet; it sounds like you're kind of a beginner at this point.
Using numpy is probably faster:
import numpy
d = 3
n = 100000
data = numpy.random.uniform(0, 1000, size=(n, d))
numpy.savetxt("input.txt", data, fmt='%.3f')
This could be a bit faster:
nlines = 100000
col = 3

for line in range(nlines):
    f.write('{} {} {}\n'.format(*(round(random.uniform(0, 1000), 3)
                                  for e in range(col))))
or with explicit format specifiers:
for line in range(nlines):
    numbers = [random.uniform(0, 1000) for e in range(col)]
    f.write('{:6.3f} {:6.3f} {:6.3f}\n'.format(*numbers))
If you want a practically unbounded loop so that you can make the file as big as you like, you can do it like this:
import random

d = 3
n = 1000
f = open('input.txt', 'w')
for i in range(10**9):
    nums = [str(round(random.uniform(0, 1000), 3)) for j in range(d)]
    f.write(' '.join(nums))
    f.write('\n')
f.close()
The code will not stop until you press Ctrl-C.
I'm having some trouble trying to use four lists with the zip function.
In particular, I'm getting the following error at line 36:
TypeError: zip argument #3 must support iteration
I've already read that this happens with non-iterable objects, but I'm using it on lists! If I use zip on only the first two lists it works perfectly; I have problems only with the last two.
Does anyone have ideas on how to solve this? Many thanks!
import numpy
#setting initial values
R = 330
C = 0.1
f_T = 1/(2*numpy.pi*R*C)
w_T = 2*numpy.pi*f_T
n = 10
T = 1
w = (2*numpy.pi)/T
t = numpy.linspace(-2, 2, 100)
#making the lists c_k, w_k, a_k, phi_k
c_karray = []
w_karray = []
A_karray = []
phi_karray = []
#populating the lists
for k in range(1, n, 2):
    c_k = 2/(k*numpy.pi)
    w_k = k*w
    A_k = 1/(numpy.sqrt(1+(w_k)**2))
    phi_k = numpy.arctan(-w_k)
    c_karray.append(c_k)
    w_karray.append(w_k)
    A_karray.append(A_k)
    phi_karray.append(phi_k)
#making the function w(t)
w = []
#doing the sum for each t and populate w(t)
for i in t:
    w_i = [(A_k*c_k*numpy.sin(w_k*i+phi_k)) for c_k, w_k, A_k, phi_k in zip(c_karray, w_karray, A_k, phi_k)]
    w.append(sum(w_i))
You probably mistyped the last two arguments to zip. They should be A_karray and phi_karray, because phi_k and A_k are single values.
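With the question's variables, the corrected loop would then be along these lines:
for i in t:
    w_i = [A_k*c_k*numpy.sin(w_k*i + phi_k)
           for c_k, w_k, A_k, phi_k in zip(c_karray, w_karray, A_karray, phi_karray)]
    w.append(sum(w_i))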
My result for w is:
[-0.11741034896740517,
-0.099189027720991918,
-0.073206290274556718,
...
-0.089754003567358978,
-0.10828235682188027,
-0.1174103489674052]
I believe you want zip(c_karray, w_karray, A_karray, phi_karray). Additionally, you should build this zipped sequence once, not on each iteration of the for loop.
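For example, a sketch (reusing the question's variables) that builds the zipped sequence once:
terms = list(zip(c_karray, w_karray, A_karray, phi_karray))   # built once, outside the loop
w = []
for i in t:
    w.append(sum(A_k*c_k*numpy.sin(w_k*i + phi_k) for c_k, w_k, A_k, phi_k in terms))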
Furthermore, you are not really making use of numpy. Try this instead of your loops.
d = numpy.arange(1, n, 2)
c_karray = 2/(d*numpy.pi)
w_karray = d*w
A_karray = 1/(numpy.sqrt(1+(w_karray)**2))
phi_karray = numpy.arctan(-w_karray)
w = (A_karray*c_karray*numpy.sin(w_karray*t[:,None]+phi_karray)).sum(axis=-1)