Writing data to a CSV file in a single row - Python

I need to write data coming in from the serial port into a single row, one value per column. So far I have been reading and writing row by row, but the requirement is to write each new value into the next column of the same row, in Python.
The file should end up looking like this:
1,2,33,43343,4555,344323
That is, all the data in a single row with multiple columns, not in one column with multiple rows.
My current code (shown below) writes the data one value per row:
1
12
2222
3234
1233
131
but I want
1 , 12 , 2222 , 3234 , 1233 , 131
that is, a single row with multiple columns.
import serial
import time
import csv

ser = serial.Serial('COM29', 57600)
timeout = time.time() + 60/6  # 10 seconds from now

while True:
    test = 0
    if test == 5 or time.time() > timeout:
        break
    ss = ser.readline()
    print ss
    s = ss.replace("\n", "")
    with open(r'C:\Users\Ivory Power\Desktop\EEG_Data\Othr_eeg\egg31.csv', 'ab') as csvfile:
        spamwriter = csv.writer(csvfile, delimiter=',', lineterminator='\n')
        spamwriter.writerow([s])
    time.sleep(0.02)

The csv module writes rows - every time you call writerow, a newline is written and a new row is started, so you can't call it repeatedly and expect to get columns. You can, however, collect the data into a list and write the whole list with a single writerow call when you are done. The csv module is arguably overkill for this.
import serial
import time
import csv

ser = serial.Serial('COM29', 57600)
timeout = time.time() + 60/6  # 10 seconds from now

data = []
for _ in range(5):
    data.append(ser.readline().strip())
    time.sleep(0.02)

with open(r'C:\Users\Ivory Power\Desktop\EEG_Data\Othr_eeg\egg31.csv', 'ab') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=',', lineterminator='\n')
    spamwriter.writerow(data)
    # csv is overkill here unless the data itself contains commas
    # that need to be escaped. You could do this instead:
    # csvfile.write(','.join(data) + '\n')
UPDATE
One of the tricks to writing a question here is to supply a short, runnable example of the problem. That way everybody runs the same thing, and we can talk about what's wrong in terms of code and output that everyone can play with.
Here is the program updated with mock data. I changed the open mode to "wb" so that the file is truncated if it already exists when the program runs. Run it and let me know how its results differ from what you want.
import csv
import time

filename = 'deleteme.csv'
test_row = '1,2,33,43343,4555,344323'
test_data = test_row.split(',')

data = []
for _ in range(6):
    data.append(test_data.pop(0).strip())
    time.sleep(0.02)

with open(filename, 'wb') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=',', lineterminator='\n')
    spamwriter.writerow(data)

print repr(open(filename).read())
assert open(filename).read().strip() == test_row, 'got one row'

Assuming your serial port data won't overrun main memory, the following code should suit your need.
import serial
import time
import csv

ser = serial.Serial('COM29', 57600)
timeout = time.time() + 60/6  # 10 seconds from now

result = []
while True:
    test = 0
    if test == 5 or time.time() > timeout:
        break
    ss = ser.readline()
    print(ss)
    s = ss.replace("\n", "")
    result.append(s)
    time.sleep(0.02)

with open(r'C:\Users\Ivory Power\Desktop\EEG_Data\Othr_eeg\egg31.csv', 'w') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=',', lineterminator='\n')
    # Pass the list itself, so each element becomes its own column
    spamwriter.writerow(result)

Related

Read input data to csv file

I am doing a BeagleBone project that measures temperature from a sensor, and I want to save the data into a CSV file. I am new to programming and Python and a little lost. I have been searching for a while and found little that helps, so I wanted to ask here instead: how do I get new data written to the CSV file every second?
import Adafruit_BBIO.ADC as ADC
import time
import csv

sensor_pin = 'P9_40'
ADC.setup()

while True:
    reading = ADC.read(sensor_pin)
    millivolts = reading * 1800  # 1.8V reference = 1800 mV
    temp_c = (millivolts - 500) / 10
    temp_f = (temp_c * 9/5) + 32
    print('mv=%d C=%d F=%d' % (millivolts, temp_c, temp_f))
    time.sleep(1)

    # field names
    fields = ['Milivolts', 'Celsius']
    # data rows of csv file
    rows = [[millivolts, "|", temp_c]]
    # name of csv file
    filename = "textfile.csv"
    # writing to csv file
    with open(filename, 'w') as csvfile:
        # creating a csv writer object
        csvwriter = csv.writer(csvfile)
        # writing the fields
        csvwriter.writerow(fields)
        # writing the data rows
        csvwriter.writerows(rows)
One fix to apply is to open the file in append mode so that the file content is not overwritten at every step; just change the 'w' to 'a' in this line:
with open(filename, 'a') as csvfile:
Please note that without any output and/or a description of the problem you are encountering, it is difficult to help you further.
Honestly, I don't see much wrong with the code other than the ordering.
The way you wrote it, every iteration of the loop opens the file in write mode, which erases it. So if I'm guessing right, you probably only ever have one row in the CSV.
The code below is just a reordering, and it should work. Note that I moved the header write before the loop because you only want it once.
Keep in mind that every time you run the program it will start a fresh CSV. If you want it to keep a history regardless of interrupts/restarts, remove the header write and use open(filename, 'a') instead.
Since this is data logging, if you run for a long time you might want to include time.time() as part of each row. That way you can see downtime etc.
import Adafruit_BBIO.ADC as ADC
import time
import csv

sensor_pin = 'P9_40'
ADC.setup()

# name of csv file
filename = "textfile.csv"

with open(filename, 'w') as csvfile:
    # creating a csv writer object
    csvwriter = csv.writer(csvfile)
    # writing the header fields once, before the loop
    csvwriter.writerow(['Milivolts', 'Celsius'])
    while True:
        reading = ADC.read(sensor_pin)
        millivolts = reading * 1800  # 1.8V reference = 1800 mV
        temp_c = (millivolts - 500) / 10
        temp_f = (temp_c * 9/5) + 32
        # writing the data rows
        rows = [[millivolts, "|", temp_c]]
        csvwriter.writerows(rows)
        time.sleep(1)
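As a sketch of the time.time() suggestion above (the file path and readings here are made up for illustration), each row gets a timestamp column, and the file is flushed so rows land on disk promptly even if the program is interrupted:

```python
import csv
import os
import tempfile
import time

# Hypothetical log file; a real run would use the sensor readings instead
path = os.path.join(tempfile.mkdtemp(), 'log.csv')
with open(path, 'w', newline='') as f:
    w = csv.writer(f)
    w.writerow(['timestamp', 'millivolts', 'celsius'])
    for mv, c in [(900.0, 40.0), (905.0, 40.5)]:  # stand-in readings
        w.writerow([round(time.time(), 3), mv, c])
        f.flush()  # make each row visible on disk immediately

with open(path) as f:
    lines = f.read().splitlines()
print(len(lines))  # header plus two data rows
```

The flush matters for long-running loggers: without it, rows sit in the write buffer and are lost if the process dies between flushes.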

Reshaping CSV columns in python

I have my data in this form
and the required form of data is
Can anybody help me in this regard?
The content of the initial CSV file as text is:
var1,var2,col1,col2,col3
a,f,1,2,3
b,g,4,5,6
c,h,7,8,9
d,i,10,11,12
You can do it directly with the csv module. You just read from the initial file, and write up to 3 rows per initial row into the resulting file:
import csv

with open('in.csv') as fdin, open('out.csv', 'w', newline='') as fdout:
    rd = csv.reader(fdin)
    wr = csv.writer(fdout)
    header = next(rd)  # read and process header
    _ = wr.writerow(header[:2] + ['columns', ''])
    for row in rd:  # loop on rows
        for i in range(3):  # loop on the 3 columns
            try:
                row2 = row[:2] + ['col{}'.format(i + 1), row[2 + i]]
                _ = wr.writerow(row2)
            except IndexError:  # prevent errors on shorter lines
                break
If you intend to do heavy data processing, you should consider using the Pandas module.
With the data sample, it gives:
var1,var2,columns,
a,f,col1,1
a,f,col2,2
a,f,col3,3
b,g,col1,4
b,g,col2,5
b,g,col3,6
c,h,col1,7
c,h,col2,8
c,h,col3,9
d,i,col1,10
d,i,col2,11
d,i,col3,12
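For reference, a minimal sketch of the Pandas approach mentioned above (assuming pandas is installed; the CSV text is inlined here instead of read from a file): DataFrame.melt performs this wide-to-long reshape in one call.

```python
import io
import pandas as pd

csv_text = """var1,var2,col1,col2,col3
a,f,1,2,3
b,g,4,5,6
c,h,7,8,9
d,i,10,11,12
"""

df = pd.read_csv(io.StringIO(csv_text))
# Turn col1..col3 into (columns, value) pairs, one output row per cell
long_df = df.melt(id_vars=['var1', 'var2'], var_name='columns')
# melt stacks all col1 rows first, so sort back into per-record order
long_df = long_df.sort_values(['var1', 'var2', 'columns']).reset_index(drop=True)
print(long_df.iloc[0].tolist())  # ['a', 'f', 'col1', 1]
```

The result can be written back out with long_df.to_csv(..., index=False) to match the csv-module output above.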

Writing data to new row when every time it updates

So I am trying to get this done: the script executes every 5 minutes and the variables (a and b) change each time. The first time the script executes, the file gets the values written, but on the next run it overwrites the previous data. I want it to write the values of a and b to the next row without overwriting the previous data.
I tried using newline='' but got an error.
import csv

a = 1
b = 4
# newline=''
with open('data.csv', mode='w') as data:
    datawriter = csv.writer(data, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    datawriter.writerow([a, b])
Is there an easy fix to achieve this?
data it has:
A B
1 6
Result i want when every time it runs:
A B
1 6
4 7
6 2
3 9
Any help would be appreciated
import csv

a = 1
b = 4
# newline=''
with open('data.csv', mode='a') as data:
    datawriter = csv.writer(data, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    datawriter.writerow([a, b])
Just change mode='w' to mode='a': 'w' means write, 'a' means append. Use append instead of write.
Check out the docs regarding file opening modes: https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files
In summary, 'w' mode will always (over)write. You want to use 'a' to append new data to your file.
In your code you will need to check whether the file exists, then choose the appropriate mode.
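A minimal sketch of that existence check (the file path here is hypothetical): the header is written only when the file is first created, and every run appends after that.

```python
import csv
import os
import tempfile

# Hypothetical location; in the question this would just be 'data.csv'
filename = os.path.join(tempfile.mkdtemp(), 'data.csv')

def log_row(a, b):
    new_file = not os.path.exists(filename)
    with open(filename, 'a', newline='') as f:
        w = csv.writer(f)
        if new_file:
            w.writerow(['A', 'B'])  # header only on first creation
        w.writerow([a, b])

log_row(1, 6)
log_row(4, 7)  # a later run: appends, no duplicate header

with open(filename) as f:
    lines = f.read().splitlines()
print(lines)  # ['A,B', '1,6', '4,7']
```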
Open your file in 'a+' mode, which appends; 'w' stands for write, which means the values are overwritten each time the file is opened. Add time.sleep(300) to get 5-minute intervals.
import csv
import time

a = 1
b = 4
while True:
    with open("data.csv", 'a+', newline='') as data:
        datawriter = csv.writer(data, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
        datawriter.writerow([a, b])
    print("Round Done")
    time.sleep(300)

How to continuously collect data and save to a file every 5 second

I am collecting data from a sensor at a 200 ms sampling rate. I need to collect its signal strength, which can be retrieved from the received data, and take the average. Currently I collect and save data every minute, but I need to shorten that interval so I can get a more real-time average value.
Here is what my code look like.
count = 0
record = []
while str(datetime.datetime.now().second) != "59":
    ser_bytes = ser.readline().decode('utf-8')[:-1].rstrip()  # Read the newest output
    if ser_bytes:
        arr = ser_bytes.split(':')
        i = 0
        db = []
        with open("test_data.csv", "a") as f:
            writer = csv.writer(f, delimiter=",")
            writer.writerow([ser_bytes])

start_time = str(datetime.datetime.now().hour) + "_" + str(datetime.datetime.now().minute) + "_" + str(datetime.datetime.now().second)
with open(pi_dir1 + r_ID + start_time + ".csv", "w") as correct:
    writer = csv.writer(correct, dialect='excel')
    with open('test_data.csv', 'r', encoding='utf-8') as mycsv:
        reader = csv.reader((line.replace('\x00', '') for line in mycsv))
        try:
            for i, row in enumerate(reader):
                writer.writerow(row)
        except csv.Error:
            print(r_ID + ' csv choked on line %s' % (i + 1))
            raise
os.remove("test_data.csv")
soc.sendall(str(r_ID).encode("utf8"))
time.sleep(5)
Here is what is inside the created CSV file, which is saved using the recording time as its file name (e.g. 12_12_59):
Collected data
Then I want to shorten the duration from 1 minute to 5 seconds. First I tried reducing it to 15 seconds, but it seems I can't simply use time.sleep(15): when I do, it captures only one line instead of continuously recording at the sampling rate.
ser_bytes = ser.readline().decode('utf-8')[:-1].rstrip()  # Read the newest output
if ser_bytes:
    arr = ser_bytes.split(':')
    i = 0
    db = []
    with open("test_data.csv", "a") as f:
        writer = csv.writer(f, delimiter=",")
        writer.writerow([ser_bytes])

start_time = str(datetime.datetime.now().hour) + "_" + str(datetime.datetime.now().minute) + "_" + str(datetime.datetime.now().second)
with open(pi_dir1 + r_ID + start_time + ".csv", "w") as correct:
    writer = csv.writer(correct, dialect='excel')
    with open('test_data.csv', 'r', encoding='utf-8') as mycsv:
        reader = csv.reader((line.replace('\x00', '') for line in mycsv))
        try:
            for i, row in enumerate(reader):
                writer.writerow(row)
        except csv.Error:
            print(r_ID + ' csv choked on line %s' % (i + 1))
            raise
os.remove("test_data.csv")
soc.sendall(str(r_ID).encode("utf8"))
time.sleep(15)
Here's what appears when I use time.sleep(15):
captured data with time.sleep(15)
Moreover, the processing time delays things, so my data is not recorded over the exact duration I want. The created file received by the server looks like this: delayed process
Please kindly help me by suggesting how to solve this problem. Thank you.
If your data is in table format, then I suggest that you think in terms of database tables. There might be a bit of a learning curve initially, but try using something simple like SQLite, or any other DB framework of your choice.
Then you can have either a shorter sleep time or multiple processes fetching data from overlapping timeframes.
Databases are fast, and later you can export your readings from the DB table to CSV, or any other format you prefer.
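A minimal SQLite sketch of that idea (the table and column names are made up, and simulated samples stand in for the serial data): buffer readings as they arrive, then query a recent time window for the average, with no temporary files or os.remove() needed.

```python
import sqlite3
import time

conn = sqlite3.connect(':memory:')  # use a file path for persistence
conn.execute('CREATE TABLE readings (ts REAL, rssi REAL)')

# Simulated signal-strength samples instead of real serial reads
for rssi in (-70, -72, -68, -71):
    conn.execute('INSERT INTO readings VALUES (?, ?)', (time.time(), rssi))
conn.commit()

# Average signal strength over the last 5 seconds
cutoff = time.time() - 5
avg = conn.execute('SELECT AVg(rssi) FROM readings WHERE ts >= ?'.replace('AVg', 'AVG'),
                   (cutoff,)).fetchone()[0]
print(avg)  # -70.25
```

Because the window is just a WHERE clause on the timestamp, shortening it from a minute to 5 seconds is a one-line change, and overlapping windows come for free.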

Parsing column from CSV and replace a value in text file with the new value

I have one CSV file, and I want to extract the first column of it. My CSV file is like this:
Device ID;SysName;Entry address(es);IPv4 address;Platform;Interface;Port ID (outgoing port);Holdtime
PE1-PCS-RANCAGUA;;;192.168.203.153;cisco CISCO7606 Capabilities Router Switch IGMP;TenGigE0/5/0/1;TenGigabitEthernet3/3;128 sec
P2-CORE-VALPO.cisco.com;P2-CORE-VALPO.cisco.com;;200.72.146.220;cisco CRS Capabilities Router;TenGigE0/5/0/0;TenGigE0/5/0/4;128 sec
PE2-CONCE;;;172.31.232.42;Cisco 7204VXR Capabilities Router;GigabitEthernet0/0/0/14;GigabitEthernet0/3;153 sec
P1-CORE-CRS-CNT.entel.cl;P1-CORE-CRS-CNT.entel.cl;;200.72.146.49;cisco CRS Capabilities Router;TenGigE0/5/0/0;TenGigE0/1/0/6;164 sec
For that purpose I use the following code that I saw here:
import csv

makes = []
with open('csvoutput/topologia.csv', 'rb') as f:
    reader = csv.reader(f, delimiter=';')  # the file is semicolon-delimited
    # next(reader)  # Ignore first row
    for row in reader:
        makes.append(row[0])

print makes
Then I want to replace a particular value in a text file with each of the values of the first column in turn, saving each result as a new file.
Original textfile:
PLANNED.IMPACTO_ID = IMPACTO.ID AND
PLANNED.ESTADOS_ID = ESTADOS_PLANNED.ID AND
TP_CLASIFICACION.ID = TP_DATA.ID_TP_CLASIFICACION AND
TP_DATA.PLANNED_ID = PLANNED.ID AND
PLANNED.FECHA_FIN >= CURDATE() - INTERVAL 1 DAY AND
PLANNED.DESCRIPCION LIKE '%P1-CORE-CHILLAN%';
Expected output:
PLANNED.IMPACTO_ID = IMPACTO.ID AND
PLANNED.ESTADOS_ID = ESTADOS_PLANNED.ID AND
TP_CLASIFICACION.ID = TP_DATA.ID_TP_CLASIFICACION AND
TP_DATA.PLANNED_ID = PLANNED.ID AND
PLANNED.FECHA_FIN >= CURDATE() - INTERVAL 1 DAY AND
PLANNED.DESCRIPCION LIKE 'FIRST_COLUMN_VALUE';
And so on for every value in the first column, and save it as a separate file.
How can I do this? Thank you very much for your help.
You could just read the file, apply the changes, and write it back again. There is no efficient way to edit a file in place (inserting characters is not efficiently possible); you can only rewrite it.
If your file is going to be big, you should not keep the whole table in memory as this code does.
import csv

makes = []
with open('csvoutput/topologia.csv', 'rb') as f:
    reader = csv.reader(f, delimiter=';')
    for row in reader:
        makes.append(row)

# Apply changes in makes

with open('csvoutput/topologia.csv', 'wb') as f:
    writer = csv.writer(f, delimiter=';')
    writer.writerows(makes)
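The answer above leaves the replacement step as a comment. Here is a hedged sketch of what the question actually describes, with in-memory stand-ins for the CSV and the template (a real run would read topologia.csv and the SQL text file, and write one output file per device):

```python
import csv
import io

# Stand-in for the semicolon-delimited topologia.csv (first column only matters)
csv_text = ("Device ID;SysName\n"
            "PE1-PCS-RANCAGUA;\n"
            "PE2-CONCE;\n")

# Stand-in for the last line of the SQL template file
template = "PLANNED.DESCRIPCION LIKE '%P1-CORE-CHILLAN%';"

reader = csv.reader(io.StringIO(csv_text), delimiter=';')
next(reader)  # skip the header row
queries = {}
for row in reader:
    device = row[0]
    # Substitute the device name into the LIKE clause; each result
    # could instead be written to its own file, e.g. device + '.sql'
    queries[device] = template.replace('P1-CORE-CHILLAN', device)

print(queries['PE2-CONCE'])  # PLANNED.DESCRIPCION LIKE '%PE2-CONCE%';
```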
