Please help me write data horizontally in a CSV file. The following code writes each character of the string on its own row (vertically):
import csv

with open('some.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerows('jgjh')
but I need the data written horizontally, in a single row.
The csv module deals in sequences; use writer.writerow() (singular) and give it a list of one column:
writer.writerow(['jgjh'])
The .writerow() method will take each element in the sequence you give it and make it one column. Best to give it a list of columns, and ['jgjh'] makes one such column.
.writerows() (plural) expects a sequence of such rows; for your example you'd have to wrap the one row into another list to make it a series of rows, so writer.writerows([['jgjh']]) would also achieve what you want.
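Putting it together, a minimal corrected version of the snippet from the question (same file name and data) would be:

import csv

with open('some.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['jgjh'])        # one row with a single column
    # or, equivalently, writerows with a list containing that one row:
    # writer.writerows([['jgjh']])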
A university task has us constructing a program that reads CSV files and prepares them for analysis. In a previous question, we wrote a function to open and read certain columns from an input CSV file. Here is the code written for that question:
import csv
import numpy

def load_metrics(filename):
    """Loads data from csv files"""
    data = []
    with open(filename) as csv_file:
        csv_reader = csv.reader(csv_file)
        for row in csv_reader:
            data.append(row[0:2] + row[7:14])
    return numpy.array(data)
However, this next question has me stumped, especially its mention of Unicode. Any idea how I should tackle it? I believe I should start by removing the header row, but I am unsure where to go from there. Thank you.
The NumPy array you created in task 1 is unstructured because we let NumPy decide what the datatype for each value should be. It also contains the header row, which is not needed for the analysis. Typically it contains float values, with some description columns like created_at, etc. So we are going to remove the header row, and we are also going to explicitly tell NumPy to convert all columns to type float (i.e., "float"), apart from the columns specified by indexes, which should be Unicode of length 30 characters (i.e., "<U30"). Finally, every row is converted to a tuple (e.g., tuple(i) for i in data).
Write a function unstructured_to_structured(data, indexes) that achieves the above goal.
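One possible sketch of such a function, assuming the column names sit in the first row of data and indexes holds the positions of the columns that should stay as text:

import numpy

def unstructured_to_structured(data, indexes):
    """Convert the raw string array from load_metrics into a structured array."""
    header = data[0]                      # first row holds the column names
    rows = data[1:]                       # drop the header row
    # '<U30' for the columns listed in indexes, float for everything else
    dtype = [(name, "<U30" if i in indexes else "float")
             for i, name in enumerate(header)]
    # a structured array expects each row as a tuple
    return numpy.array([tuple(row) for row in rows], dtype=dtype)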
import csv

def deletedata(uniquecode):
    with open('Stallingsbestand.csv', 'r+') as CSV:
        writer = csv.writer(CSV, delimiter=';')
        for row in CSV:
            if uniquecode in row:
                writer.writerow((uniquecode, ''))
Stallingsbestand.csv consists of rows that look like this:
uniquecode;Date_of_last_opening_a_function
I want to be able to delete the date of last opening and just have the unique code there.
(Appending False at the end of the row would work too, but I don't know which is easier.)
I thought that just overwriting the row would be easiest, but I can't get it to work. Does anyone know how to make this work?
You want to rename the file to Stallingsbestand.old and write out a new version of Stallingsbestand.csv. One way to do this is to copy (sometimes modified) rows from a csv.reader to a csv.writer within a loop, similar to your current code; see the sketch after the links below.
You might find it more convenient to create an in-memory dataframe with pandas.read_csv(), mutate one of its rows, and then persist it with DataFrame.to_csv().
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html
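A minimal sketch of the rename-and-rewrite approach, assuming every row has the unique code in the first of two semicolon-separated columns:

import csv
import os

def deletedata(uniquecode):
    os.replace('Stallingsbestand.csv', 'Stallingsbestand.old')   # keep the original as a backup
    with open('Stallingsbestand.old', newline='') as src, \
         open('Stallingsbestand.csv', 'w', newline='') as dst:
        reader = csv.reader(src, delimiter=';')
        writer = csv.writer(dst, delimiter=';')
        for row in reader:
            if row and row[0] == uniquecode:
                writer.writerow([row[0], ''])    # blank out the date column
            else:
                writer.writerow(row)             # copy all other rows unchanged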
At the moment I know how to write a list to one row of a CSV file, but when there are multiple lists they all end up on the same row. What I would like to do is write the first list to the first row of the CSV and the second list to the second row.
Code:
for i in range(10):
    final = [i*1, i*2, i*3]
    with open('0514test.csv', 'a') as file:
        file.write(','.join(str(x) for x in final))
You may want to add a line break at the end of every row. You can do that by writing, for example:
file.write('\n')
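In the context of the loop from the question, that would look something like this (keeping the same file name):

for i in range(10):
    final = [i*1, i*2, i*3]
    with open('0514test.csv', 'a') as file:
        file.write(','.join(str(x) for x in final))
        file.write('\n')   # end the row so the next list starts on a new line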
The csv module from the standard library provides objects to read and write CSV files. In your case, you could do:
import csv

for i in range(10):
    final = [i*1, i*2, i*3]
    with open("0514test.csv", "a", newline="") as file:
        writer = csv.writer(file)
        writer.writerow(final)
Using this module is often safer in real-life situations because it takes care of all the CSV machinery: quoting fields with " or ', handling cases where your separator also appears in your data, and so on.
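A quick illustration of that quoting behaviour, using a made-up value that contains the separator:

import csv
import io

buffer = io.StringIO()
csv.writer(buffer).writerow(['a,b', 'plain'])
print(buffer.getvalue())   # "a,b",plain  -- the embedded comma is quoted automatically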
The Problem
I have a CSV file that contains a large number of items.
The first column can contain either an IP address or random garbage. The only other column I care about is the fourth one.
I have written the below snippet of code in an attempt to check if the first column is an IP address and, if so, write that and the contents of the fourth column to another CSV file side by side.
import csv
import re

with open('results.csv', 'r') as csvresults:
    filecontent = csv.reader(csvresults)
    output = open('formatted_results.csv', 'w')
    processedcontent = csv.writer(output)
    for row in filecontent:
        first = str(row[0])
        fourth = str(row[3])
        if re.match(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', first) != None:
            processedcontent.writerow(["{},{}".format(first, fourth)])
        else:
            continue
    output.close()
This works to an extent. However, when viewing in Excel, both items are placed in a single cell rather than two adjacent ones. If I open it in notepad I can see that each line is wrapped in quotation marks. If these are removed Excel will display the columns properly.
Example Input
1.2.3.4,rubbish1,rubbish2,reallyimportantdata
Desired Output
1.2.3.4 reallyimportantdata - two separate columns
Actual Output
"1.2.3.4,reallyimportantdata" - single column
The Question
Is there any way to fudge the format part to not write out with quotations? Alternatively, what would be the best way to achieve what I'm trying to do?
I've tried writing out to another file and stripping the lines but, despite not throwing any errors, the result was the same...
writerow() takes a list of elements and writes each of those into its own column. Since you are feeding it a list with only one element, everything ends up in a single column.
Instead, feed writerow() a list of the two values:
processedcontent.writerow([first,fourth])
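For context, the whole loop with that fix applied might look like this (also using with for both files and newline='', which goes beyond the one-line fix but keeps Excel from showing blank lines):

import csv
import re

with open('results.csv', newline='') as csvresults, \
     open('formatted_results.csv', 'w', newline='') as output:
    filecontent = csv.reader(csvresults)
    processedcontent = csv.writer(output)
    for row in filecontent:
        if re.match(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', row[0]):
            processedcontent.writerow([row[0], row[3]])   # two separate columns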
Have you considered using Pandas?
import re
import pandas as pd

df = pd.read_csv("myFile.csv", header=0, low_memory=False, index_col=None)
fid = open("outputp.csv", "w")
for index, row in df.iterrows():
    aa = re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", row['IP'])
    if aa:
        tline = '{0},{1}\n'.format(row['IP'], row['fourth column'])
        fid.write(tline)
fid.close()
There may be an error or two and I got the regex from here.
This assumes the first row of the CSV has column titles that can be referenced. If it does not, you can pass header=None and reference the columns with iloc.
Come to think of it, you could probably run the regex on the DataFrame directly, copy the first and fourth columns to a new DataFrame, and use pandas' to_csv method, as sketched below.
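A rough sketch of that vectorised variant, still assuming the column names 'IP' and 'fourth column' used above:

import pandas as pd

df = pd.read_csv("myFile.csv", header=0, low_memory=False, index_col=None)
# True where the first column looks like an IP address
mask = df['IP'].astype(str).str.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")
df.loc[mask, ['IP', 'fourth column']].to_csv("outputp.csv", index=False)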
I'm trying to manipulate data in large txt files.
I have a txt file with more than 2000 columns, and about a third of them have a title that contains the word 'Net'. I want to extract only these columns and write them to a new txt file. Any suggestions on how I can do that?
I have searched around a bit but haven't been able to find something that helps me. Apologies if similar questions have been asked and solved before.
EDIT 1: Thank you all! At the moment of writing 3 users have suggested solutions and they all work really well. I honestly didn't think people would answer so I didn't check for a day or two, and was happily surprised by this. I'm very impressed.
EDIT 2: I've added a picture that shows what part of the original txt file can look like, in case it helps anyone in the future.
One way of doing this, without installing third-party modules like numpy/pandas, is as follows. Given an input file called "input.csv" that looks like this:
a,b,c_net,d,e_net
0,0,1,0,1
0,0,1,0,1
The following code does what you want.
import csv

input_filename = 'input.csv'
output_filename = 'output.csv'

# Open the input and output files; check that you have the appropriate delimiter
with open(input_filename, newline='') as infile, \
     open(output_filename, 'w', newline='') as outfile:
    reader = csv.reader(infile, delimiter=',')
    # Get the first row (assuming this row contains the header)
    input_header = next(reader)
    # Find the columns that you want to keep by storing their indexes
    columns_to_keep = []
    for i, name in enumerate(input_header):
        if 'net' in name:
            columns_to_keep.append(i)
    # Create a CSV writer to store the columns you want to keep
    writer = csv.writer(outfile, delimiter=',')
    # Construct the header of the output file
    output_header = []
    for column_index in columns_to_keep:
        output_header.append(input_header[column_index])
    # Write the header to the output file
    writer.writerow(output_header)
    # Iterate over the remainder of the input file, construct a row
    # with the columns you want to keep, and write it to the output file
    for row in reader:
        new_row = []
        for column_index in columns_to_keep:
            new_row.append(row[column_index])
        writer.writerow(new_row)
Note that there is no error handling. There are at least two cases that should be handled. The first is checking that the input file exists (hint: look at the functionality provided by the os and os.path modules). The second is handling blank lines or lines with an inconsistent number of columns.
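Sketched against the names used in the block above, those two checks could look roughly like this:

import os.path

# 1. Check that the input file exists before opening it
if not os.path.exists(input_filename):
    raise FileNotFoundError(input_filename)

# 2. Same reader/writer setup as above, but skip blank or short rows before indexing
for row in reader:
    if not row or len(row) <= max(columns_to_keep):
        continue   # blank line or not enough columns
    new_row = [row[column_index] for column_index in columns_to_keep]
    writer.writerow(new_row)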
This could be done, for instance, with Pandas:
import pandas as pd
df = pd.read_csv('path_to_file.txt', sep=r'\s+')
print(df.columns) # check that the columns are parsed correctly
selected_columns = [col for col in df.columns if "net" in col]
df_filtered = df[selected_columns]
df_filtered.to_csv('new_file.txt')
Of course, since we don't have the structure of your text file, you would have to adapt the arguments of read_csv to make this work in your case (see the corresponding documentation).
This will load the whole file into memory and then filter out the unnecessary columns. If your file is so large that it cannot be loaded into RAM at once, you can load only specific columns with the usecols argument.
You can use pandas' filter function to select columns whose names match a regex:
data_filtered = data.filter(regex='net')
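In full, a minimal sketch of that approach might be as follows; the file names and whitespace separator are carried over from the previous answer, and the regex 'net' from the line above (adjust its case to match your column titles):

import pandas as pd

data = pd.read_csv('path_to_file.txt', sep=r'\s+')
data_filtered = data.filter(regex='net')       # keep only columns whose name matches the regex
data_filtered.to_csv('new_file.txt', index=False)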