Since the JSON and pickle methods aren't working out for me, I've decided to save my dictionaries as strings. Saving works, but the strings aren't being read back.
E.g.
Dictionary
a={'name': 'joe'}
Save:
file = open("save.txt", "w")
file.write(str(a))
file.close()
And that works.
But my load method doesn't read it.
Load:
f = open("save.txt", "r")
a = f
f.close()
So a just ends up as the file object, not my dictionary.
I really don't want to use json or pickle, is there any way I could get this method working?
First, you're not actually reading anything from the file (the file object is not its contents). Second, once you fix that, you'll get a string and need to transform that into a dictionary.
Fortunately, both are straightforward to address:
from ast import literal_eval

with open("save.txt") as infile:
    data = literal_eval(infile.read())
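Putting the two halves together, a minimal round trip might look like this (the dict contents here are made up):

```python
from ast import literal_eval

# A made-up example dict; any dict of basic Python literals works.
a = {'name': 'joe', 'level': 3}

# Save: repr() produces a literal that literal_eval can parse back.
with open("save.txt", "w") as outfile:
    outfile.write(repr(a))

# Load: read the text back and evaluate it safely.
with open("save.txt") as infile:
    loaded = literal_eval(infile.read())

print(loaded == a)  # True
```

Unlike eval, literal_eval only accepts Python literals, so it won't execute arbitrary code from the file.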
This is my first question here, I'm new to python and trying to figure some things out to set up an automatic 3D model processing chain that relies on data being stored in JSON files moving from one server to another.
The problem is that I need to store absolute paths to files that are being processed, but these absolute paths should be modified in the original JSON files upon the first time that they are processed.
Basically the JSON file comes in like this:
{
"normaldir": "D:\\Outgoing\\1621_1\\",
"projectdir": "D:\\Outgoing\\1622_2\\"
}
And I would like to rename the file paths to
{
"normaldir": "X:\\Incoming\\1621_1\\",
"projectdir": "X:\\Incoming\\1622_2\\",
}
What I've been trying to do is replace the first part of the path using this code, but it isn't working:
def processscan(scanfile):
    configfile = MonitorDirectory + scanfile
    with open(configfile, 'r+') as file:
        content = file.read()
        file.seek(0)
        content.replace("D:\\Outgoing\\", "X:\\Incoming\\")
        file.write(content)
However this was not working at all, so I tried interpreting the JSON file properly and replacing the key code from here:
def processscan(scanfile):
    configfile = MonitorDirectory + scanfile
    with open(configfile, 'r+') as settingsData:
        settings = json.load(settingsData)
        settings['normaldir'] = 'X:\\Incoming\\1621_1\\'
        settings['projectdir'] = 'X:\\Incoming\\1622_2\\'
        settingsData.seek(0)  # rewind to beginning of file
        settingsData.write(json.dumps(settings, indent=2, sort_keys=True))  # write the updated version
        settingsData.truncate()  # truncate the remainder of the data in the file
This works perfectly, however I'm replacing the whole path so it won't really work for every JSON file that I need to process. What I would really like to do is to take a JSON key corresponding to a file path, keep the last 8 characters and replace the rest of the patch with a new string, but I can't figure out how to do this using json in python, as far as I can tell I can't edit part of a key.
Does anyone have a workaround for this?
Thanks!
Your replace logic failed because you need to reassign content to the new string; str.replace is not an in-place operation, it creates a new string:
content = content.replace("D:\\Outgoing\\", "X:\\Incoming\\")
With the json approach, just do a replace too, using the current value:
settings['normaldir'] = settings['normaldir'].replace("D:\\Outgoing\\", "X:\\Incoming\\")
You would also want to truncate() before you write, or just reopen the file with w and dump/write the new value. If you really want to keep only the last 8 characters and prepend a new string:
settings['normaldir'] = "X:\\Incoming\\" + settings['normaldir'][-8:]
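If the trailing folder name isn't always exactly 8 characters, a sketch using ntpath could split off the last path segment instead of relying on a fixed length (the paths here are made up):

```python
import ntpath  # handles Windows-style paths on any platform

# Hypothetical input path; the goal is to keep only the final folder name.
old = "D:\\Outgoing\\1621_1\\"

tail = ntpath.basename(old.rstrip("\\"))        # '1621_1'
new = ntpath.join("X:\\Incoming", tail) + "\\"
print(new)  # X:\Incoming\1621_1\
```

This way the same code works for any folder name length under the old prefix.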
Python comes with a json library.
With this library, you can read and write JSON files (or JSON strings).
Parsed data is converted to Python objects and vice versa.
To use the json library, simply import it:
import json
Say your data is stored in input_data.json file.
input_data_path = "input_data.json"
You read the file like this:
import io

with io.open(input_data_path, mode="rb") as fd:
    obj = json.load(fd)
or, alternatively:
with io.open(input_data_path, mode="rb") as fd:
    content = fd.read()
obj = json.loads(content)
Your data is automatically converted into Python objects, here you get a dict:
print(repr(obj))
# {u'projectdir': u'D:\\Outgoing\\1622_2\\',
# u'normaldir': u'D:\\Outgoing\\1621_1\\'}
note: I'm using Python 2.7 so you get the unicode string prefixed by "u", like u'projectdir'.
It's now easy to change the values for normaldir and projectdir:
obj["normaldir"] = "X:\\Incoming\\1621_1\\"
obj["projectdir"] = "X:\\Incoming\\1622_2\\"
Since obj is a dict, you can also use the update method like this:
obj.update({'normaldir': "X:\\Incoming\\1621_1\\",
            'projectdir': "X:\\Incoming\\1622_2\\"})
That way, you use a similar syntax like JSON.
Finally, you can write your Python object back to JSON file:
output_data_path = "output_data.json"
with io.open(output_data_path, mode="wb") as fd:
    json.dump(obj, fd)
or, alternatively with indentation:
content = json.dumps(obj, indent=True)
with io.open(output_data_path, mode="wb") as fd:
    fd.write(content)
Remarks: reading/writing JSON objects is faster with a buffer (the content variable).
.replace returns a new string and doesn't change the original. But you shouldn't treat JSON files as plain text files anyway, so you can combine parsing the JSON with the replace:
def processscan(scanfile):
    configfile = MonitorDirectory + scanfile
    with open(configfile, 'rb') as settingsData:
        settings = json.load(settingsData)
    settings = {k: v.replace("D:\\Outgoing\\", "X:\\Incoming\\")
                for k, v in settings.items()}
    with open(configfile, 'wb') as settingsData:
        json.dump(settings, settingsData)
I'm trying to write some code to create a file that will write data about a "character". I've been able to write strings using:
f = open('player.txt','w')
f.write("Karatepig")
f.close()
f = open('player.txt','r')
f.read()
The issue is, how do I store something other than a string to a file? Can I convert it from a string to a value?
Files can only store strings, so you have to convert other values to strings when writing and convert them back to their original values when reading.
The Python standard library has a whole section dedicated to data persistence that can help make this task easier.
However, for simple types, it is perhaps easiest to use the json module to serialize data to a file and read it back again with ease:
import json

def write_data(data, filename):
    with open(filename, 'w') as outfh:
        json.dump(data, outfh)

def read_data(filename):
    with open(filename, 'r') as infh:
        return json.load(infh)
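As a quick, self-contained illustration of the same idea, here is a json round trip; the filename and character data are made up:

```python
import json
import os
import tempfile

# Made-up character data mixing several basic types.
data = {"name": "Karatepig", "score": 42, "items": ["sword", "shield"]}

# Hypothetical file location in the system temp directory.
path = os.path.join(tempfile.gettempdir(), "player.json")

with open(path, "w") as outfh:
    json.dump(data, outfh)

with open(path) as infh:
    restored = json.load(infh)

print(restored == data)  # True
```

Strings, numbers, booleans, lists, and dicts all survive the round trip with their original types intact.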
Using string.Template I want to store the values to substitute into the template in separate files that I can loop through.
Looping is the easy part. I then want to run
result = s.safe_substitute(title=titleVar, content=contentVar)
on my template. I’m just a little stumped in what format to store these values in a text file and how to read that file with python.
What you are looking for is called serialization. In this case, you want to serialize a dict, such as
values = dict(title='titleVar', content='contentVar')
There are many ways to serialize, using XML, pickle, YAML, or JSON formats, for example. Here is how you could do it with JSON:
import string
import json

values = dict(title='titleVar', content='contentVar')

with open('/tmp/values', 'w') as f:
    json.dump(values, f)

with open('/tmp/values', 'r') as f:
    newvals = json.load(f)

s = string.Template('''\
$title
$content''')
result = s.safe_substitute(newvals)
print(result)
Is there a short way to get number of objects in pickled file - shorter than writing a function that opens the file, keeps calling pickle.load method and updating num_of_objs by 1 until it catches EOFError and returns the value?
No, there isn't. The pickle format does not store that information.
If you need that type of metadata, you need to add it to the file yourself when writing:
pickle.dump(len(objects), fileobj)
for ob in objects:
pickle.dump(ob, fileobj)
Now the first record tells you how many more are to follow.
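A sketch of both sides of that scheme, using an in-memory buffer in place of a real file (the object list is made up):

```python
import io
import pickle

objects = ["a", "b", "c"]  # made-up example data
fileobj = io.BytesIO()     # stands in for a real file here

# Write side: the count first, then each object.
pickle.dump(len(objects), fileobj)
for ob in objects:
    pickle.dump(ob, fileobj)

# Read side: the count back, then exactly that many objects.
fileobj.seek(0)
count = pickle.load(fileobj)
loaded = [pickle.load(fileobj) for _ in range(count)]
print(count, loaded)  # 3 ['a', 'b', 'c']
```

Because the count is read first, the reader never has to rely on catching EOFError to know when to stop.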
There is no direct way of finding the length of a pickle file, but if you are afraid of running an endless loop, you could try the following:
import pickle

company_id_processed = []
with open("responses_pickle.pickle", "rb") as f:
    while True:
        try:
            current_id = pickle.load(f)['name']
            company_id_processed.append(current_id)
        except EOFError:
            print('Pickle ends')
            break
The best way is to store and load data objects with descriptive file names. For example, if you want to save two dataframes, you can name the pickle file "datasets_name_2DFs.pickle". When you want to load them, you can simply get the number from the file name and loop that many times to get the pickled objects. This is easier for me. For the code part, you can do whatever suits you.
Or you can use other methods like this:
import pickle

with open(path, "wb") as f:
    pickle.dump(len(data), f)
    for value in data:
        pickle.dump(value, f)

data_list = []
with open(path, "rb") as f:
    for _ in range(pickle.load(f)):
        data_list.append(pickle.load(f))
print(data_list)
I am a Python beginner struggling to create and save a list containing tuples from a CSV file in Python.
The code I got for now is:
def load_file(filename):
    fp = open(filename, 'Ur')
    data_list = []
    for line in fp:
        data_list.append(line.strip().split(','))
    fp.close()
    return data_list
and then I would like to save the file
def save_file(filename, data_list):
    fp = open(filename, 'w')
    for line in data_list:
        fp.write(','.join(line) + '\n')
    fp.close()
Unfortunately, my code returns a list of lists, not a list of tuples... Is there a way to create one list containing multiple tuples without using csv module?
split returns a list; if you want a tuple, convert it to a tuple:
data_list.append(tuple(line.strip().split(',')))
Please use the csv module.
First question: why is a list of lists bad? In the sense of "duck-typing", this should be fine, so maybe you think about it again.
If you really need a list of tuples - only small changes are needed.
Change the line
data_list.append(line.strip().split(','))
to
data_list.append(tuple(line.strip().split(',')))
That's it.
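As a quick check, here is what that conversion produces on a sample line (the line content is made up):

```python
line = "Karatepig,12,fighter\n"  # made-up CSV line
row = tuple(line.strip().split(','))
print(row)  # ('Karatepig', '12', 'fighter')
```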
If you ever want to get rid of custom code (less code is better code), you could stick to the csv-module. I'd strongly recommend using as many library methods as possible.
To show off some advanced Python features: your load_file method could also look like:
def load_file(filename):
    with open(filename, 'Ur') as fp:
        data_list = [tuple(line.strip().split(",")) for line in fp]
    return data_list
I use a list comprehension here, it's very concise and easy to understand.
Additionally, I use the with-statement, which will close your file pointer, even if an exception occurred within your code. Please always use with when working with external resources, like files.
Just wrap tuple() around the line.strip().split(',') and you'll get a list of tuples.