Writing data to a new row each time the script runs - Python

The script executes every 5 minutes and the variables (a and b) change with each run. The first time the script executes, the file gets the values, but on the next run it overwrites the previous data. I want it to write the values of a and b to the next row, without overwriting the previous data.
I tried using newline='' but got an error.
import csv

a = 1
b = 4
#newline=''
with open('data.csv', mode='w') as data:
    datawriter = csv.writer(data, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    datawriter.writerow([a, b])
Is there an easy fix to achieve this quickly?
Data the file currently has:
A B
1 6
Result I want after each run:
A B
1 6
4 7
6 2
3 9
Any help would be appreciated

import csv

a = 1
b = 4
#newline=''
with open('data.csv', mode='a') as data:
    datawriter = csv.writer(data, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    datawriter.writerow([a, b])
Just change mode='w' to mode='a'. 'w' means write, 'a' means append; use append instead of write.
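A minimal, self-contained sketch of the difference (a temp file stands in for data.csv):

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.csv")

# mode='w' truncates on every open: only the last row survives
for a, b in [(1, 6), (4, 7)]:
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow([a, b])
with open(path) as f:
    assert f.read().splitlines() == ["4,7"]

# mode='a' appends: each run adds a new row
os.remove(path)
for a, b in [(1, 6), (4, 7)]:
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([a, b])
with open(path) as f:
    print(f.read().splitlines())  # ['1,6', '4,7']
```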

Check out the doc regarding file opening modes: https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files
In summary w mode will always (over)write. You want to use a to append new data to your file.
In your code you may also want to check whether the file already exists, then choose the appropriate behavior.
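A hedged sketch of that existence check, assuming you only want a header row written on the very first run (the temp path is a stand-in for the real file):

```python
import csv
import os
import tempfile

a, b = 1, 4
filename = os.path.join(tempfile.mkdtemp(), "data.csv")  # stand-in for data.csv

write_header = not os.path.exists(filename)  # True only before the first run
with open(filename, "a", newline="") as f:
    writer = csv.writer(f)
    if write_header:
        writer.writerow(["A", "B"])  # header written once
    writer.writerow([a, b])          # data row appended every run
```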

Open your file in 'a+' mode, which means appending; 'w' stands for write, which means the values are overwritten each time the file is opened. Add time.sleep(300) to get five-minute intervals.
import csv
import time

a = 1
b = 4
while True:
    with open("data.csv", 'a+', newline='') as data:
        datawriter = csv.writer(data, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
        datawriter.writerow([a, b])
    print("Round Done")
    time.sleep(300)

Related

Writing data to a CSV file in a single row

I need to write data coming from the serial port to a single row, one value per column. My current code reads and writes row by row, but the requirement is to write each value to the next column of a single row, in Python.
I need the file to end up like this:
1,2,33,43343,4555,344323
That is, all the data in a single row and multiple columns, not one column and multiple rows.
My code below writes the data one row after another:
1
12
2222
3234
1233
131
but I want
1 , 12 , 2222 , 3234 , 1233 , 131
i.e., a single row with multiple columns.
import serial
import time
import csv

ser = serial.Serial('COM29', 57600)
timeout = time.time() + 60/6  # 10 seconds from now
while True:
    test = 0
    if test == 5 or time.time() > timeout:
        break
    ss = ser.readline()
    print ss
    s = ss.replace("\n", "")
    with open(r'C:\Users\Ivory Power\Desktop\EEG_Data\Othr_eeg\egg31.csv', 'ab') as csvfile:
        spamwriter = csv.writer(csvfile, delimiter=',', lineterminator='\n')
        spamwriter.writerow([s])
    time.sleep(0.02)
The csv module writes rows - every time you call writerow a newline is written and a new row is started. So, you can't call it multiple times and expect to get columns. You can, however, collect the data into a list and then write that list when you are done. The csv module is overkill for this.
import serial
import time
import csv

ser = serial.Serial('COM29', 57600)
data = []
for _ in range(5):
    data.append(ser.readline().strip())
    time.sleep(0.02)
with open(r'C:\Users\Ivory Power\Desktop\EEG_Data\Othr_eeg\egg31.csv', 'ab') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=',', lineterminator='\n')
    spamwriter.writerow(data)
    # csv is overkill here unless the data itself contains commas
    # that need to be escaped. You could do this instead:
    # csvfile.write(','.join(data) + '\n')
UPDATE
One of the tricks to question writing here is to supply a short, runnable example of the problem. That way, everybody runs the same thing and you can talk about what's wrong in terms of code and output everyone can play with.
Here is the program updated with mock data. I changed the open mode to "wb" so that the file is truncated if it already exists when the program runs. Run it and let me know how its results differ from what you want.
import csv
import time

filename = 'deleteme.csv'
test_row = '1,2,33,43343,4555,344323'
test_data = test_row.split(',')
data = []
for _ in range(6):
    data.append(test_data.pop(0).strip())
    time.sleep(0.02)
with open(filename, 'wb') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=',', lineterminator='\n')
    spamwriter.writerow(data)
print repr(open(filename).read())
assert open(filename).read().strip() == test_row, 'got one row'
Assuming your serial port data won't overrun your main memory, the following code would suit your need:
import serial
import time
import csv

ser = serial.Serial('COM29', 57600)
timeout = time.time() + 60/6  # 10 seconds from now
result = []
while True:
    test = 0
    if test == 5 or time.time() > timeout:
        break
    ss = ser.readline()
    print(ss)
    s = ss.replace("\n", "")
    result.append(s)
    time.sleep(0.02)
with open(r'C:\Users\Ivory Power\Desktop\EEG_Data\Othr_eeg\egg31.csv', 'w') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=',', lineterminator='\n')
    spamwriter.writerow(result)  # pass the list itself, not [result]

Write and recode from one csv file to another

I am trying to select specific columns from a large tab-delimited CSV file and output only certain columns to a new CSV file. Furthermore, I want to recode the data as this happens. If the cell has a value of 0 then just output 0. However, if the cell has a value of greater than 0, then just output 1 (i.e., all values greater than 0 are coded as 1).
Here's what I have so far:
import csv

outputFile = open('output.csv', 'wb')
outputWriter = csv.writer(outputFile)
included_cols = range(9, 2844)
with open('source.txt', 'rb') as f:
    reader = csv.reader(f, delimiter='\t')
    for row in reader:
        content = list(row[i] for i in included_cols)
        outputWriter.writerow(content)
The first issue I am having is that I want to also take from column 6. I wasn't sure how to write column 6 and then columns 9-2844.
Second, I wasn't sure how to do the recoding on the fly as I write the new CSV.
I wasn't sure how to write column 6 and then columns 9-2844.
included_cols = [6] + list(range(9,2844))
This works because you can add two lists together. Note that in Python3, range doesn't return a list, so we have to coerce it.
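For example (a shortened range, purely for illustration):

```python
# [6] + the coerced range gives one flat list of column indices
included_cols = [6] + list(range(9, 13))
print(included_cols)  # [6, 9, 10, 11, 12]
```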
I wasn't sure how to do the recoding on the fly
content = [1 if int(row[i]) > 0 else 0 for i in included_cols]
This works because of the conditional expression: 1 if int(row[i]) > 0 else 0. The general form A if cond else B evaluates to either A or B, depending upon the condition. Note the int() call: csv hands you strings, so compare numbers, not text.
Another form, which I think is "too clever by half", is content = [(int(row[i]) and 1) for i in included_cols]. This works because the and operator always returns one or the other of its operands; it still needs int(), since the non-empty string '0' is truthy.
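A tiny runnable check of the recoding, assuming the cells arrive as strings from csv (the values are hypothetical):

```python
row = ["0", "3", "0", "12"]  # hypothetical cell values from one csv row
recoded = [1 if int(v) > 0 else 0 for v in row]
print(recoded)  # [0, 1, 0, 1]
```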
This should work:
import csv

outputFile = open('output.csv', 'wb')
outputWriter = csv.writer(outputFile)
included_cols = [5] + list(range(8, 2844))  # you can just merge two lists
with open('source.txt', 'rb') as f:
    reader = csv.reader(f, delimiter='\t')
    outputWriter.writerow(next(reader))  # write header row unchanged
    for row in reader:  # the header row has already been consumed
        content = [int(row[i]) if i == 5 else (0 if int(row[i]) == 0 else 1) for i in included_cols]
        outputWriter.writerow(content)
outputFile.close()

Remove row from CSV that contains empty cell using Python

I am splitting a CSV file, based on a column with dates, into separate files. However, some rows contain a date while the other cells are empty. I want to remove these rows with empty cells from the CSV, but I'm not sure how to do this.
Here's is my code:
import csv
import sys
import collections

csv.field_size_limit(sys.maxsize)
with open(main_file, "r") as fp:
    root = csv.reader(fp, delimiter='\t', quotechar='"')
    result = collections.defaultdict(list)
    next(root)
    for row in root:
        year = row[0].split("-")[0]
        result[year].append(row)
for i, j in result.items():
    row_count = sum(1 for row in j)
    print(row_count)
    file_path = "%s%s-%s.csv" % (src_path, i, row_count)
    with open(file_path, 'w') as fp:
        writer = csv.writer(fp, delimiter='\t', quotechar='"')
        writer.writerows(j)
Pandas is perfect for this, especially if you want it to be easily adjusted to, say, other file formats. Of course, one could consider it overkill.
To just remove rows with empty cells:
>>> import pandas as pd
>>> data = pd.read_csv('example.csv', sep='\t')
>>> print data
     A  B  C
0    1  2  5
1  NaN  1  9
2    3  4  4
>>> data.dropna()
   A  B  C
0  1  2  5
2  3  4  4
>>> data.dropna().to_csv('example_clean.csv')
I leave performing the splitting and saving into separate files using pandas as an exercise to start learning this great package if you want :)
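If you do want a sketch of that split anyway, here is one way with groupby; the column names, sample data, and output paths are assumptions, not from the question:

```python
import os
import tempfile

import pandas as pd

outdir = tempfile.mkdtemp()  # hypothetical destination directory
data = pd.DataFrame({"date": ["1980-01-02", "1980-03-04", "1981-05-06"],
                     "value": [1.0, None, 3.0]})
data["year"] = data["date"].str.split("-").str[0]

# drop rows with empty cells, then write one file per year,
# naming each file "<year>-<rowcount>.csv" as in the question's code
for year, group in data.dropna().groupby("year"):
    path = os.path.join(outdir, "%s-%d.csv" % (year, len(group)))
    group.drop(columns="year").to_csv(path, sep="\t", index=False)
```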
This would skip all rows with at least one empty cell:
with open(main_file, "r") as fp:
    ...
    for row in root:
        if not all(map(len, row)):
            continue
Pandas is best in Python for handling any type of data processing. For help, you can go through this link: http://pandas.pydata.org/pandas-docs/stable/10min.html

How to select every Nth row in CSV file using python

I have a CSV file with hundreds of rows, and I would like to export every 3 rows to a new CSV file, naming each output file after the first row of its group.
For example in the following CSV file....
1980 10 12
1 2 3 4 5 6 7
4 6 8 1 0 8 6
1981 10 12
2 4 9 7 5 4 1
8 9 3 8 3 7 3
I would like to select the first 3 rows and export to a new CSV named "1980 10 12" based on the first row then select the next 3 rows and export to a new CSV named "1981 10 12" based on the first row of the next 3 rows. I would like to do this using python.
Using the csv module, plus itertools.islice() to select 3 rows each time:
import csv
import os.path
from itertools import islice

with open(inputfilename, 'rb') as infh:
    reader = csv.reader(infh)
    for row in reader:
        filename = row[0].replace(' ', '_') + '.csv'
        filename = os.path.join(directory, filename)
        with open(filename, 'wb') as outfh:
            writer = csv.writer(outfh)
            writer.writerow(row)
            writer.writerows(islice(reader, 2))
The writer.writerows(islice(reader, 2)) line takes the next 2 rows from the reader, copying them across to the writer CSV, after writing the current row (with the date) to the output file first.
You may need to adjust the delimiter argument for the csv.reader() and csv.writer() objects; the default is a comma, but you didn't specify the exact format and perhaps you need to set it to a '\t' tab instead.
If you are using Python 3, open the files with 'r' and 'w' text mode, and set newline='' for both; open(inputfilename, 'r', newline='') and open(filename, 'w', newline='').
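The islice consumption pattern can be checked in isolation: after pulling the date row with next(), islice(reader, 2) takes exactly the next two rows from the same iterator, so the outer loop resumes at the following group.

```python
from itertools import islice

rows = iter([["1980 10 12"], ["1 2 3"], ["4 6 8"], ["1981 10 12"]])
first = next(rows)             # the date row
block = list(islice(rows, 2))  # the next two rows, consumed from the same iterator
print(first, block)
```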
import csv

with open("in.csv") as f:
    reader = csv.reader(f)
    chunks = []
    for ind, row in enumerate(reader, 1):
        chunks.append(row)
        if ind % 3 == 0:  # if we have three new rows, create a file using the first row as the name
            with open("{}.csv".format(chunks[0][0].strip()), "w") as f1:
                wr = csv.writer(f1)
                wr.writerows(chunks)  # write all rows
            chunks = []  # reset chunks to an empty list
Using slight iterator trickery:
with open('in.csv', 'r') as infh:
    for block in zip(*[infh]*3):
        filename = block[0].strip() + '.csv'
        with open(filename, 'w') as outfh:
            outfh.writelines(block)
On Python 2.X you would use itertools.izip. The docs actually mention izip(*[iter(s)]*n) as an idiom for clustering a data series.
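The grouping idiom can be seen on a plain list, clustering a flat sequence into threes:

```python
s = [1, 2, 3, 4, 5, 6]
# one iterator, referenced three times: zip pulls one item from each
# reference in turn, so consecutive items land in the same tuple
groups = list(zip(*[iter(s)] * 3))
print(groups)  # [(1, 2, 3), (4, 5, 6)]
```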

How to read and write a table / matrix to file with python?

I'm trying to create a program that takes data and puts it in a 2 by 10 table of just numbers in a text file. Then the program needs to retrieve this information in later iterations. But I have no idea how to do this. I've been looking at numpy commands, regular file commands, and ways to make a table, but I can't get any of it to work.
Here is an example of the table I am trying to make:
0 1 1 1 0 9 6 5
5 2 7 2 1 1 1 0
Then I would retrieve these values. What is a good way to do this?
Why not use the csv module?
import csv

table = [[1, 2, 3], [4, 5, 6]]

# write it
with open('test_file.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerows(table)

# read it
with open('test_file.csv', 'r', newline='') as csvfile:
    reader = csv.reader(csvfile)
    table = [[int(e) for e in r] for r in reader]
This approach has the added benefit of making files that are readable by other programs, like Excel.
Heck, if you really need it space or tab-delimited, just add delimiter="\t" to your reader and writer construction.
numpy should be enough:
import numpy as np

table = np.loadtxt(filename)
This will have shape (2, 10). If you want it transposed, just add .T right after the closing bracket.
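A round trip with savetxt/loadtxt, using the table from the question as shown (8 columns in the example; a temp path stands in for the real filename):

```python
import os
import tempfile

import numpy as np

table = np.array([[0, 1, 1, 1, 0, 9, 6, 5],
                  [5, 2, 7, 2, 1, 1, 1, 0]])
path = os.path.join(tempfile.mkdtemp(), "table.txt")
np.savetxt(path, table, fmt="%d")  # space-separated integers
loaded = np.loadtxt(path)          # comes back as a float array
print(loaded.shape)  # (2, 8)
```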
To handle the lines one by one:
with open('filename') as f:
    for ln in f:
        a = [int(x) for x in ln.split()]
or, to generate a two-dimensional array:
with open('filename') as f:
    a = [[int(x) for x in ln.split()] for ln in f]
Thanks Ord and Francesco Montesano for the comments
