I have the following code:
logFile=open('c:\\temp\\mylogfile'+'.txt', 'w')
pprint.pprint(dataobject)
How can I send the contents of dataobject to the log file in pretty-print format?
with open("yourlogfile.log", "w") as log_file:
pprint.pprint(dataobject, log_file)
See the documentation.
Please use pprint.pformat, which returns a formatted string that can be written directly to a file.
>>> import pprint
>>> with open("file_out.txt", "w") as fout:
... fout.write(pprint.pformat(vars(pprint)))
...
Reference:
http://docs.python.org/2/library/pprint.html
For Python 2.7
logFile = open('c:\\temp\\mylogfile'+'.txt', 'w')
pp = pprint.PrettyPrinter(indent=4, stream=logFile)
pp.pprint(dataobject)  # you can reuse this pp.pprint for other objects
A related question: I have a dict, e.g.:
>>> gtf['mykey1']
{'name': {'apple': '20', 'eat': ['Leo', 'Amy', 'Lily', 'Lucy']}}
I want to save this output to a local file named out.txt.
How should I do this?
I tried
%store gtf['mykey1'] > out.txt
which did not work.
Thanks.
Did you try just opening a file and writing to it:
import json
output = open('output.txt', 'w')
stringified_entry = json.dumps(gtf['mykey1'])
output.write(stringified_entry)
output.close()
The following also works, but I find it less convenient to use in a prompt:
with open('output.txt', 'w') as output:
    output.write(str(gtf['mykey1']))  # write() needs a string, so convert the dict first
Try this:
with open("output.txt", "w+") as file:
    file.write(str(gtf['mykey1']))  # convert the dict to a string before writing
You need to stringify a dict before saving it to a .txt file in Python; you cannot save a dict as is. Either convert it to a string first, or pass it to json.dump() to save it as JSON.
This answers your question:
import json
stringified_json = json.dumps(gtf['mykey1'])
output = open('output.txt', 'w')
output.write(stringified_json)
output.close()
json.dumps takes a dict and makes it a string.
You can later load it back into a dict with:
import json
output = open('output.txt', 'r')
stringified_json = output.read()
loaded_dict = json.loads(stringified_json)  # parse the JSON string back into a dict
output.close()
I have downloaded a compressed json file and want to open it as a dictionary.
I used json.load, but the result is still a string.
I want to extract a keyword list from the json file. Is there a way I can do it even though my data is a string?
Here is my code:
import gzip
import json
with gzip.open("19.04_association_data.json.gz", "r") as f:
data = f.read()
with open('association.json', 'w') as json_file:
json.dump(data.decode('utf-8'), json_file)
with open("association.json", "r") as read_it:
association_data = json.load(read_it)
print(type(association_data))
# The actual output is 'str' but I expect 'dict'
In the first with block you already have the uncompressed string; there is no need to write it out and open it a second time.
import gzip
import json
with gzip.open("19.04_association_data.json.gz", "r") as f:
data = f.read()
j = json.loads (data.decode('utf-8'))
print (type(j))
Open the file using the gzip package from the standard library (docs), then read it directly into json.loads():
import gzip
import json
with gzip.open("19.04_association_data.json.gz", "rb") as f:
data = json.loads(f.read(), encoding="utf-8")
To read from a json.gz, you can use the following snippet:
import json
import gzip
with gzip.open("file_path_to_read", "rt") as f:
    expected_dict = json.load(f)
The result is of type dict.
If you want to write to a json.gz, you can use the following snippet:
import json
import gzip
with gzip.open("file_path_to_write", "wt") as f:
    json.dump(expected_dict, f)
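For example, a quick round trip with a small dict (the file name and the dict contents here are just illustrative):
import gzip
import json
expected_dict = {"name": {"apple": "20", "eat": ["Leo", "Amy"]}}
with gzip.open("example.json.gz", "wt") as f:
    json.dump(expected_dict, f)
with gzip.open("example.json.gz", "rt") as f:
    assert json.load(f) == expected_dict  # the round trip gives back the same dict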
I use Python 2.7 and I have a question about reading from a tempfile. Here is my code:
import tempfile
for i in range(0,10):
    f = tempfile.NamedTemporaryFile()
    f.write("Hello")
    ##f.seek(0)
    print f.read()
With this code, I get something like this:
Rワ
nize.pyR
゙`Sc
d
Rワ
Rワ
Z
Z
nize.pyR
゙`Sc
What are these?
Thanks!
You are writing a string to a file opened in bytes mode. Add the mode parameter to your call to NamedTemporaryFile:
f = tempfile.NamedTemporaryFile("w+")  # text mode, opened for both writing and reading back
See https://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files
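For completeness, a minimal sketch (assuming the same Python 2.7 setup as the question) that also restores the commented-out f.seek(0), so that read() starts from the beginning of the file rather than from where the write left off:
import tempfile
f = tempfile.NamedTemporaryFile("w+")
f.write("Hello")
f.seek(0)       # rewind before reading
print f.read()  # -> Hello
f.close()       # the temporary file is removed on close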
I'm basically writing text to a file, for example:
data = open("save.data", "a+")
data.write(u"name = 'zrman'")
I want to be able to load that file later and do this in Python:
print name
Any help would be great
-Thx
Use the ConfigParser module from the Python standard library. It's exactly what you're looking for.
Write:
import ConfigParser
config = ConfigParser.RawConfigParser()
config.add_section('main')  # the section must exist before setting options in it
config.set('main', 'name', 'zrman')
with open('conf.ini', 'wb') as configfile:
    config.write(configfile)
Read:
from ConfigParser import ConfigParser
config = ConfigParser()
config.read('conf.ini')
print config.sections()
# ['main']
print config.items('main')
# [('name', 'zrman')]
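To pull back just the single value (so that print name works as in the question), reading from the same conf.ini:
from ConfigParser import ConfigParser
config = ConfigParser()
config.read('conf.ini')
name = config.get('main', 'name')  # -> 'zrman'
print name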
Take a look at the docs here; they will walk you through the process of reading and writing files in Python.
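For instance, a minimal sketch of the plain read/write pattern; the file name and the name = 'zrman' line come from the question, while the parsing is just illustrative:
# write the value out
with open('save.data', 'w') as f:
    f.write("name = 'zrman'\n")
# read it back and recover the value
with open('save.data') as f:
    for line in f:
        key, _, value = line.partition('=')
        if key.strip() == 'name':
            name = value.strip().strip("'")
print name  # -> zrman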
You could read in the lines and then exec the code:
f = open('workfile', 'r')  # open for reading; 'w' would truncate the file
for line in f:
    exec(line)
print name
I have the following code:
import re
#open the xml file for reading:
file = open('path/test.xml','r+')
#convert to string:
data = file.read()
file.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>",r"<xyz>ABC</xyz>\1<xyz>\2</xyz>",data))
file.close()
where I'd like to replace the old content that's in the file with the new content. However, when I execute my code, the file "test.xml" is appended to, i.e. I have the old content followed by the new "replaced" content. What can I do in order to delete the old stuff and only keep the new?
You need to seek to the beginning of the file before writing, and then use file.truncate() if you want to do an in-place replace:
import re
myfile = "path/test.xml"
with open(myfile, "r+") as f:
data = f.read()
f.seek(0)
f.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>", r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", data))
f.truncate()
The other way is to read the file then open it again with open(myfile, 'w'):
with open(myfile, "r") as f:
data = f.read()
with open(myfile, "w") as f:
f.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>", r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", data))
Neither truncate nor open(..., 'w') will change the inode number of the file (I tested twice, once with Ubuntu 12.04 NFS and once with ext4).
By the way, this is not really related to Python; the interpreter just calls the corresponding low-level API. The truncate() method works the same way in the C programming language: see http://man7.org/linux/man-pages/man2/truncate.2.html
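If you want to verify the inode claim yourself, a quick sketch using os.stat (the path is just an example):
import os
before = os.stat("path/test.xml").st_ino
with open("path/test.xml", "w") as f:
    f.write("new content")
after = os.stat("path/test.xml").st_ino
print(before == after)  # True: same inode, the contents were replaced in place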
file = 'path/test.xml'
with open(file, 'w') as filetowrite:
    filetowrite.write('new content')
Open the file in 'w' mode and you will be able to replace its current text and save the file with the new contents.
Using truncate(), the solution could be:
import re
# open the xml file for reading and writing:
with open('path/test.xml', 'r+') as f:
    # convert to string:
    data = f.read()
    f.seek(0)
    f.write(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>", r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", data))
    f.truncate()
import os  # must import this library
if os.path.exists('TwitterDB.csv'):
    os.remove('TwitterDB.csv')  # this deletes the file
else:
    print("The file does not exist")  # add this to prevent errors
I had a similar problem, and instead of overwriting my existing file using the different 'modes', I just deleted the file before using it again, so that it would be as if I was appending to a new file on each run of my code.
See How to Replace String in File, which works in a simple way and is an answer that uses replace:
fin = open("data.txt", "rt")
fout = open("out.txt", "wt")
for line in fin:
    fout.write(line.replace('pyton', 'python'))
fin.close()
fout.close()
In my case, the following code did the trick:
import json
with open("output.json", "w+") as outfile:  # w+ creates the file if it does not exist and overwrites any existing content
    json.dump(result_plot, outfile)
Using the Python 3 pathlib library:
import re
from pathlib import Path
import shutil
shutil.copy2("/tmp/test.xml", "/tmp/test.xml.bak") # create backup
filepath = Path("/tmp/test.xml")
content = filepath.read_text()
filepath.write_text(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>",r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", content))
A similar method, using a different approach to backups:
import re
from pathlib import Path

filepath = Path("/tmp/test.xml")
backup = filepath.with_suffix('.bak')  # different approach to backups
filepath.rename(backup)  # move the original aside as the backup
content = backup.read_text()  # the original path no longer exists after the rename, so read from the backup
filepath.write_text(re.sub(r"<string>ABC</string>(\s+)<string>(.*)</string>", r"<xyz>ABC</xyz>\1<xyz>\2</xyz>", content))