Is there a way to just open/create filehandle = open("example.bin", "wb") and extend this file with the contents of an existing file?
I'm thinking of something like the .extend method for lists.
Like so:
filehandle = open( "example.bin", "wb")
filehandle.extend(existing_file.bin)
I know that I can read the existing file, write its contents to a variable/list and "paste" them into the new file, but I'm curious whether there is an easier option like this...
with open('original', 'a') as out_file, open('other', 'r') as ins_file:
    out_file.write(ins_file.read())
This will append the contents of other onto original. If you're dealing with binary data you can change the mode on each to ab and rb.
If the contents of the file are large you can do it in chunks too.
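For example, a chunked copy might look like this (just a sketch; the 64 KB chunk size is an arbitrary choice):
with open('original', 'ab') as out_file, open('other', 'rb') as ins_file:
    while True:
        chunk = ins_file.read(64 * 1024)  # read up to 64 KB at a time
        if not chunk:  # empty result means end of file
            break
        out_file.write(chunk)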
You can't merge file objects. You can make a list of the lines in each file and concatenate them:
files_combined = list(open("example.bin", "rb")) + list(open("file_2", "rb"))
This returns a new list with all the lines of file_2 appended to the lines of example.bin. You can then save it to a new file, or overwrite one of the files, as shown below.
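For example, writing the combined list out to a third file (combined.bin is just an illustrative name) might look like:
# build the combined list of lines, then write it to a new (hypothetical) file
with open("example.bin", "rb") as f1, open("file_2", "rb") as f2:
    files_combined = list(f1) + list(f2)
with open("combined.bin", "wb") as out:
    out.writelines(files_combined)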
I am trying to create a binary file (called textsnew) and then append two (previously created) binary files to it. When I print the resulting (textsnew), it only shows the first file appended to it, not the second one. I do however see that the size of the new file (textsnew) is the sum of the two appended files. Maybe I'm opening it incorrectly? This is my code
with open("/path/textsnew", "ab") as myfile, open("/path/names", "rb") as file2:
myfile.write(file2.read())
with open("/path/textsnew", "ab") as myfile, open("/path/namesthree", "rb") as file2:
myfile.write(file2.read())
This code is for reading the file:
import pickle
infile1 = open('/path/textsnew','rb')
names1 = pickle.load(infile1)
print(names1)
Open the new file and write its own data first (if it has any).
Then, while the new file is still open (in append mode), open the second file, read its data and immediately write that data to the first file.
Then repeat the procedure for the third file.
Everything in binary, of course, although it will work just as well with text files. Linux/macOS/*nix don't even really care.
This also assumes that the built-in I/O buffer size will read the full file contents in one go, as in your question. Otherwise, you would need to create a loop around the read/write parts.
with open('/path/textsnew', 'ab') as fpout:
    fpout.write(data)  # 'data' stands for whatever initial content the new file should get, if any
    with open('/path/names', 'rb') as fpin:
        fpout.write(fpin.read())
    with open('/path/namesthree', 'rb') as fpin:
        fpout.write(fpin.read())
I have a CSV file that consists of 3 elements per row. I'm trying to make the first element of each row a nested list of string elements that I can easily search through. I can't get my code to print my results, so I don't know whether I've actually rewritten the CSV file correctly.
import csv

with open('trump_tweets_proto.csv', 'w+') as file:
    contents = csv.reader(file)
    contents2 = csv.writer(file)
    for row in contents:
        for info in row:
            contents2.writerow(row[0].split())
            print(row[0])
You can't have a reader and writer object open on the same file at the same time. Or if you can, you certainly can't use both simultaneously and not mess up your file.
Open the file for reading (and be sure to look at the csv module documentation on how to open a csv file correctly; you're missing the newline parameter and should probably specify an encoding as well).
Open another (temporary) file for writing, and then copy the temp file over the old file. Or do everything in memory with the first file, then open the file for writing and write the new data directly.
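A sketch of the in-memory approach, using the file name from the question (the utf-8 encoding is an assumption, and each output row mirrors the writerow(row[0].split()) call from the question):
import csv

# first pass: read every row into memory
with open('trump_tweets_proto.csv', newline='', encoding='utf-8') as f:
    rows = list(csv.reader(f))

# second pass: reopen the same file for writing and write the new data
with open('trump_tweets_proto.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    for row in rows:
        if row:  # skip blank rows
            writer.writerow(row[0].split())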
I have a txt file,
import numpy as np

k1=np.linspace(0,1,10)
k2=np.linspace(0,2,10)
k3=np.linspace(0,3,10)
k4=np.linspace(0,4,10)
np.savetxt('k.txt',np.transpose([k1,k2,k3,k4]))
Now I want to replace the second column (k2) with a new array
k5=np.linspace(0,5,10)
How can I efficiently replace that column without reusing np.savetxt or a loop?
It's not easy to do without rewriting the entire file. Text files are a simple, sequential method of data storage, without fixed structure, indexing, or random access. So you'd have to implement all those features yourself.
The common approach is to read the file, edit it, and overwrite the file with the data the way you want.
Another approach when the file is too big to fit entirely in memory is to create a new temporary file, and read the old file line by line in a loop, while you write to the new file. Then delete the old file and rename the new file to have the same name as the original.
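A sketch of that temporary-file approach for the k.txt example (the second column is index 1, and %.18e matches the np.savetxt default used to create the file):
import os
import tempfile
import numpy as np

k5 = np.linspace(0, 5, 10)  # the new column from the question

# stream k.txt line by line, replacing the second column with values from k5,
# then replace the original file with the rewritten copy
with open('k.txt') as src, tempfile.NamedTemporaryFile('w', delete=False, dir='.') as tmp:
    for line, new_value in zip(src, k5):
        cols = line.split()
        cols[1] = '%.18e' % new_value  # same format np.savetxt used
        tmp.write(' '.join(cols) + '\n')
os.replace(tmp.name, 'k.txt')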
If you need more than that, I suggest using a database. SQLite is included in Python (as the sqlite3 module) and can do what you ask directly inside a file, really fast.
import fileinput

def change(pair):
    line, newk = pair
    oldk = line.split()
    # replace the second column with the new value, keep the others
    return "%s %.18e %s %s\n" % (oldk[0], newk, oldk[2], oldk[3])

filename = "k.txt"
lines = fileinput.input(filename)
tuples = zip(lines, k5)  # k5 is the new column from the question
newlines = list(map(change, tuples))
fileinput.close()

with open(filename, "r+") as fo:
    fo.writelines(newlines)
Here the script creates a list of (line, k) tuples containing, for each line, the new value for k.
%.18e is the default format for numpy.savetxt(), so you need to adapt the format to the same one used when creating your source file.
You don't need a loop. You can read the whole file into a list with readlines(), modify a specific element of the list, then write it back out to the file with writelines().
with open("filename.txt", "r+") as f:
lines = f.readlines()
lines[2] = "k5=np.linspace(0,5,10)\n"
f.seek(0)
f.writelines(lines)
f.truncate()
f.seek(0) goes back to the beginning of the file, so that f.writelines() overwrites it. Then we use f.truncate() to remove any excess content if the replacement is shorter than the original file contents.
Don't forget the \n in the new contents of lines[2]. writelines() doesn't put newlines between the strings in its argument.
I'm attempting to read a CSV file and then write the read CSV into another CSV file.
Here is my code so far:
import csv

with open("mastertable.csv") as file:
    for row in file:
        print row

with open("table.csv", "w") as f:
    f.write(file)
I eventually want to read a CSV file and write it to a new CSV with appended data.
I get this error when I try to run it.
Traceback (most recent call last):
File "readlines.py", line 8, in <module>
f.write(file)
TypeError: expected a character buffer object
From what I understood it seems that I have to close the file, but I thought the with statement closed it automatically?
I'm not sure why I can write a string to a text file but can't simply write one CSV to another, almost like making a copy by iterating over it.
To read in a CSV and write to a different one, you might do something like this:
with open("table.csv", "w") as f:
with open ("mastertable.csv") as file:
for row in file:
f.write(row)
But I would only do that if the rows needed to be edited while transcribed. For the described use case, you can simply copy the file with shutil beforehand and then open the copy to append to it. This method will be much faster, not to mention far more readable.
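For instance, the shutil-based approach might look like this (a sketch; the appended line is only a placeholder):
import shutil

# copy the whole file in one go, then append to the copy
shutil.copyfile("mastertable.csv", "table.csv")
with open("table.csv", "a") as f:
    f.write("extra,appended,data\n")  # placeholder for whatever data you want to append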
The with statement will handle file closing for you, and will close the file when you leave that block of code (given by the indentation level).
It looks like you intend to make use of the Python csv module. The following should be a good starting point for what you are trying to achieve:
import csv
with open("mastertable.csv", "r") as file_input, open("table.csv", "wb") as file_output:
csv_input = csv.reader(file_input)
csv_output = csv.writer(file_output)
for cols in csv_input:
cols.append("more data")
csv_output.writerow(cols)
This will read the mastertable.csv file one line at a time as a list of columns. I append an extra column, and then write each line to table.csv.
Note, when you leave the scope of a with statement, the file is automatically closed.
The file variable is not the actual file data; it is a reference pointer used to read the data. When you do the following:
with open ("mastertable.csv") as file:
for row in file:
print row
the file pointer gets closed automatically. The write method expects a character buffer or a string as the input, not a file pointer.
If you just want to copy data, you can do something like this:
data = ""
with open ("mastertable.csv","r") as file:
data = file.read()
with open ("table.csv","a") as file:
file.write(data)`
How can I clear a complete CSV file with Python? Most forum entries that cover the issue of deleting rows/columns basically say: write the stuff you want to keep into a new file. I need to completely clear a file. How can I do that?
Basically you want to truncate the file; this works for any file. In this case it's a csv file, so:
filename = "filewithcontents.csv"
# opening the file with w+ mode truncates the file
f = open(filename, "w+")
f.close()
Your question is rather strange, but I'll interpret it literally. Clearing a file is not the same as deleting it.
You want to open a file object to the CSV file, and then truncate the file, bringing it to zero length.
f = open("filename.csv", "w")
f.truncate()
f.close()
If you want to delete it instead, that's just an os filesystem call:
import os
os.remove("filename.csv")
The Python csv module is only for reading and writing whole CSV files, not for manipulating them in place. If you need to filter data from a file, then you have to read it, create a new csv file, and write the filtered rows back to the new file.
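A minimal sketch of that read-filter-write pattern (the file names and the filter condition are made up for illustration):
import csv

# keep only rows whose first column is non-empty, writing them to a new file
with open('input.csv', newline='') as src, open('filtered.csv', 'w', newline='') as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        if row and row[0]:  # example filter condition
            writer.writerow(row)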