Python: Transform a string into a variable and its value - python

I am relatively new on Python.
The program I am writing reads an XML file line by line using a while loop. Each line is split, so the information I get looks something like:
datas = ['Name="Date"', 'Tag="0x03442333"', 'Level="Acquisition"', 'Type="String"']
Inside my program, I want to assign the value after the = sign to a variable named exactly like the word before the = sign. I will then pass these as attributes to a class (that part already works).
What I have done so far is:
Name = ''
Tag = ''
Level = ''
Type = ''
for i in datas:
    exec(i)
It works fine that way. However, I would rather not use the exec function. Is there another way of doing this?
Thank you

exec is generally the way to go about this. You could also write into the globals() dictionary directly, but that can be dangerous as well:
for pair in datas:
    name, value = pair.split("=")
    globals()[name] = eval(value)

You are right that you should avoid exec for security reasons, and you should probably keep the field values in a dict or similar structure. It's better to let a Python library do the whole parsing. For example, using ElementTree:
import xml.etree.ElementTree as ET
tree = ET.parse('myfile.xml')
root = tree.getroot()
and then iterate over root and its children, depending on what your XML data looks like.
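For illustration, here is a minimal sketch of that iteration, assuming the fields from your example (Name, Tag, ...) appear as attributes on elements somewhere in the tree; the file name is a placeholder:
import xml.etree.ElementTree as ET

tree = ET.parse('myfile.xml')
root = tree.getroot()

for elem in root.iter():        # walks every element in the tree
    if elem.attrib:             # attributes come back as an ordinary dict
        print(elem.tag, elem.attrib)
        # e.g. elem.attrib.get('Name'), elem.attrib.get('Tag'), ...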

What you are asking for is discussed here: convert a string to a variable name.
But what you should ideally do is create a dictionary, like this:
d = {}
for i in datas:
    key, value = i.split("=")
    d[key] = eval(value)
NOTE: Still avoid using eval.
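If you want to drop eval as well, ast.literal_eval is a safer stand-in: it only evaluates Python literals such as the quoted strings in your data, so it cannot execute arbitrary code. A minimal sketch:
import ast

datas = ['Name="Date"', 'Tag="0x03442333"', 'Level="Acquisition"', 'Type="String"']
d = {}
for item in datas:
    key, value = item.split("=", 1)
    d[key] = ast.literal_eval(value)   # '"Date"' -> 'Date'

print(d)   # {'Name': 'Date', 'Tag': '0x03442333', ...}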

As Pelle Nilsson says, you should use a proper XML parser for this. However, if the data is simple and the format of your XML file is stable, you can do it by hand.
Do you have a particular reason to put this data into a class? A dictionary may be all you need:
datas = ['Name="Date"', 'Tag="0x03442333"', 'Level="Acquisition"', 'Type="String"']
datadict = {}
for s in datas:
    key, val = s.split('=')
    datadict[key] = val.strip('"')
print(datadict)
Output:
{'Tag': '0x03442333', 'Type': 'String', 'Name': 'Date', 'Level': 'Acquisition'}
You can pass such a dictionary to a class, if you want:
class MyClass(object):
    def __init__(self, data):
        self.__dict__ = data

    def __repr__(self):
        s = ', '.join('{0}={1!r}'.format(k, v) for k, v in self.__dict__.items())
        return 'Myclass({0})'.format(s)
a = MyClass(datadict)
print(a)
print(a.Name, a.Tag)
Output:
Myclass(Tag='0x03442333', Type='String', Name='Date', Level='Acquisition')
Date 0x03442333
All of the code in this answer should work correctly on any recent version of Python 2 as well as on Python 3, although if you run it on Python 2 you should put
from __future__ import print_function
at the top of the script.

Related

Python Dictionary as Valid JSON?

In Python I have:
dict = {}
dict['test'] = 'test'
When I print it I get:
{'test':'test'}
How can I make it like this:
{"test":"test"}
Please note: replace won't work, as the value may itself contain an apostrophe, e.g. test't...
I tried:
dict = {}
dict["test"] = "test"
You can use json.dumps().
For example, print(json.dumps(d)) should give you the desired output.
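A minimal sketch (the keys and values below are just placeholders):
import json

d = {}
d['test'] = 'test'
d['quote'] = "test't"        # apostrophes inside values are handled correctly

print(json.dumps(d))         # {"test": "test", "quote": "test't"}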
Additionally, as suggested in a different related question, you may construct your own version of a dict with special printing:
How to create a Python dictionary with double quotes as default quote format?

How do I 'professionally' store small data in Python? [duplicate]

I need to store basic data about customers, the cars they bought, and the payment schedule for those cars. The data comes from a GUI written in Python. I don't have enough experience to use a database system like SQL, so I want to store my data in a file as plain text. It doesn't have to be online.
To be able to search and filter the data, I first convert it (lists of lists) to a string, and when I need it again I re-convert it to regular Python list syntax. I know it is a very brute-force way, but is it safe to do it like that, or can you advise me of a better way?
It is never safe to save your database in a text format (or using pickle or whatever). There is a risk that problems while saving the data may cause corruption. Not to mention risks with your data being stolen.
As your dataset grows there may be a performance hit.
Have a look at SQLite (the sqlite3 module), which is small and easier to manage than MySQL, unless you have a very small dataset that will fit in a text file.
P.S. By the way, using Berkeley DB in Python is simple; you don't have to learn all the DB machinery, just import bsddb.
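As a rough illustration of how little code the standard-library sqlite3 module needs (the table and column names below are made up for illustration, not taken from the question):
import sqlite3

conn = sqlite3.connect('customers.db')      # creates the file if it does not exist
conn.execute('CREATE TABLE IF NOT EXISTS customers (name TEXT, car TEXT, due REAL)')
conn.execute('INSERT INTO customers VALUES (?, ?, ?)', ('Alice', 'Corolla', 1200.0))
conn.commit()

for row in conn.execute('SELECT name, car FROM customers WHERE due > 0'):
    print(row)

conn.close()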
The answer suggesting pickle is good, but I personally prefer shelve. It lets you keep variables in the same state they were in between launches, and I find it easier to use than pickle directly. http://docs.python.org/library/shelve.html
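A minimal shelve sketch (the file name and keys are made up):
import shelve

with shelve.open('customers.shelf') as db:
    db['alice'] = {'car': 'Corolla', 'payments': [200, 200, 150]}

with shelve.open('customers.shelf') as db:
    print(db['alice'])    # the dict comes back exactly as it was stored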
I agree with the others that serious and important data would be more secure in some type of light database but can also feel sympathy for the wish to keep things simple and transparent.
So, instead of inventing your own text-based data format, I would suggest you use YAML.
The format is human-readable, for example:
List of things:
- Alice
- Bob
- Evan
You load the file like this:
>>> import yaml
>>> f = open('test.yaml', 'r')
>>> data = yaml.safe_load(f)
And data will look like this:
{'List of things': ['Alice', 'Bob', 'Evan']}
Of course you can do the reverse too and save data as YAML; the docs will help you with that.
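A minimal sketch of that reverse direction (safe_dump is enough for plain lists and dicts):
import yaml

data = {'List of things': ['Alice', 'Bob', 'Evan']}
with open('test.yaml', 'w') as f:
    yaml.safe_dump(data, f, default_flow_style=False)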
At least another alternative to consider :)
Very simple and basic (more at http://pastebin.com/A12w9SVd):
import json, os

db_name = 'udb.db'

def check_db(name=db_name):
    if not os.path.isfile(name):
        print('no db\ncreating..')
        udb = open(db_name, 'w')
        udb.close()

def read_db():
    try:
        udb = open(db_name, 'r')
    except IOError:
        check_db()
        return read_db()
    try:
        dicT = json.load(udb)
        udb.close()
        return dicT
    except ValueError:
        udb.close()
        return {}

def update_db(newdata):
    data = read_db()
    wdb = dict(data)
    wdb.update(newdata)   # merge old and new entries
    udb = open(db_name, 'w')
    json.dump(wdb, udb)
    udb.close()
Usage:
def adduser():
    print('add user:')
    name = input('name > ')
    password = input('password > ')
    update_db({name: password})
You can use the pickle library to write an object to a file: http://docs.python.org/library/pickle.html
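A minimal pickle sketch (file name and data are placeholders):
import pickle

records = [['Alice', 'Corolla', 1200.0], ['Bob', 'Civic', 800.0]]

with open('records.pkl', 'wb') as f:
    pickle.dump(records, f)

with open('records.pkl', 'rb') as f:
    loaded = pickle.load(f)

print(loaded == records)    # True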
Writing data to a plain file isn't a safe way to store it. Better to use a simple database library like SQLAlchemy; it is an ORM that makes database usage easy...
You can also keep simple data in a plain text file, though you then get little support for checking data consistency, duplicate values, etc.
Here is my simple 'card file' style snippet for data in a text file, using namedtuple so that you can access values not only by index within a line but also by header name:
# text based data input with data accessible
# with named fields or indexing
from __future__ import print_function  ## Python 3 style printing
from collections import namedtuple
import string

filein = open("sample.dat")
datadict = {}

headerline = filein.readline().lower()  ## lowercase field names Python style
## first non-letter and non-number is taken to be the separator
separator = headerline.strip(string.ascii_lowercase + string.digits)[0]
print("Separator is '%s'" % separator)

headerline = [field.strip() for field in headerline.split(separator)]
Dataline = namedtuple('Dataline', headerline)
print('Fields are:', Dataline._fields, '\n')

for data in filein:
    data = [f.strip() for f in data.split(separator)]
    d = Dataline(*data)
    datadict[d.id] = d  ## do hash of id values for fast lookup (key field)

## examples based on sample.dat file example
key = '123'
print('Email of record with key %s by field name is: %s' %
      (key, datadict[key].email))

## by number
print('Address of record with key %s by field number is: %s' %
      (key, datadict[key][3]))

## print the dictionary in separate lines for clarity
for key, value in datadict.items():
    print('%s: %s' % (key, value))

input('Ready')  ## let the output be seen when run directly
""" Output:
Separator is ';'
Fields are: ('id', 'name', 'email', 'homeaddress')
Email of record with key 123 by field name is: gishi#mymail.com
Address of record with key 123 by field number is: 456 happy st.
345: Dataline(id='345', name='tony', email='tony.veijalainen#somewhere.com', homeaddress='Espoo Finland')
123: Dataline(id='123', name='gishi', email='gishi#mymail.com', homeaddress='456 happy st.')
Ready
"""

How to efficiently detect an XML schema without having the entire file in python

I have a very large feed file that is sent as an XML document (5GB). What would be the fastest way to parse the structure of the main item node without previously knowing its structure? Is there a means in Python to do so 'on-the-fly' without having the complete xml loaded in memory? For example, what if I just saved the first 5MB of the file (by itself it would be invalid xml, as it wouldn't have ending tags) -- would there be a way to parse the schema from that?
Update: I've included an example XML fragment here: https://hastebin.com/uyalicihow.xml. I'm looking to extract something like a dataframe (or list or whatever other data structure you want to use) similar to the following:
Items/Item/Main/Platform   Items/Item/Info/Name
iTunes                     Chuck Versus First Class
iTunes                     Chuck Versus Bo
How could this be done? I've added a bounty to encourage answers here.
Several people have misinterpreted this question, and re-reading it, it's really not at all clear. In fact there are several questions.
How to detect an XML schema
Some people have interpreted this as saying you think there might be a schema within the file, or referenced from the file. I interpreted it as meaning that you wanted to infer a schema from the content of the instance.
What would be the fastest way to parse the structure of the main item node without previously knowing its structure?
Just put it through a parser, e.g. a SAX parser. A parser doesn't need to know the structure of an XML file in order to split it up into elements and attributes. But I don't think you actually want the fastest parse possible (in fact, I don't think performance is that high on your requirements list at all). I think you want to do something useful with the information (you haven't told us what): that is, you want to process the information, rather than just parsing the XML.
Is there a python utility that can do so 'on-the-fly' without having the complete xml loaded in memory?
Yes, according to this page which mentions 3 event-based XML parsers in the Python world: https://wiki.python.org/moin/PythonXml (I can't vouch for any of them)
what if I just saved the first 5MB of the file (by itself it would be invalid xml, as it wouldn't have ending tags) -- would there be a way to parse the schema from that?
I'm not sure you know what the verb "to parse" actually means. Your phrase certainly suggests that you expect the file to contain a schema, which you want to extract. But I'm not at all sure you really mean that. And in any case, if it did contain a schema in the first 5 MB, you could find it just by reading the file sequentially; there would be no need to "save" the first part of the file first.
Question: way to parse the structure of the main item node without previously knowing its structure
This class TopSequenceElement parses an XML file to find all sequence elements.
The default is to break at the FIRST closing </...> of the topmost element.
Therefore, it is independent of the file size and even works with truncated files.
from lxml import etree
from collections import OrderedDict

class TopSequenceElement(etree.iterparse):
    """
    Read XML File
    results: .seq == OrderedDict of Sequence Element
             .element == topmost closed </..> Element
             .xpath == XPath to top_element
    """
    class Element:
        """
        Classify a Element
        """
        SEQUENCE = (1, 'SEQUENCE')
        VALUE = (2, 'VALUE')

        def __init__(self, elem, event):
            if len(elem):
                self._type = self.SEQUENCE
            else:
                self._type = self.VALUE
            self._state = [event]
            self.count = 0
            self.parent = None
            self.element = None

        @property
        def state(self):
            return self._state

        @state.setter
        def state(self, event):
            self._state.append(event)

        @property
        def is_seq(self):
            return self._type == self.SEQUENCE

        def __str__(self):
            return "Type:{}, Count:{}, Parent:{:10} Events:{}"\
                .format(self._type[1], self.count, str(self.parent), self.state)

    def __init__(self, fh, break_early=True):
        """
        Initialize 'iterparse' only to callback at 'start'|'end' Events
        :param fh: File Handle of the XML File
        :param break_early: If True, break at FIRST closing </..> of the topmost Element
                            If False, run until EOF
        """
        super().__init__(fh, events=('start', 'end'))
        self.seq = OrderedDict()
        self.xpath = []
        self.element = None
        self.parse(break_early)

    def parse(self, break_early):
        """
        Parse the XML Tree, doing
          classify the Element, process only SEQUENCE Elements
          record count of end </...> Events,
                 parent of this Element
                 element Tree of this Element
        :param break_early: If True, break at FIRST closing </..> of the topmost Element
        :return: None
        """
        parent = []
        try:
            for event, elem in self:
                tag = elem.tag
                _elem = self.Element(elem, event)
                if _elem.is_seq:
                    if event == 'start':
                        parent.append(tag)
                        if tag in self.seq:
                            self.seq[tag].state = event
                        else:
                            self.seq[tag] = _elem
                    elif event == 'end':
                        parent.pop()
                        if parent:
                            self.seq[tag].parent = parent[-1]
                        self.seq[tag].count += 1
                        self.seq[tag].state = event
                        if self.seq[tag].count == 1:
                            self.seq[tag].element = elem
                        if break_early and len(parent) == 1:
                            break
        except etree.XMLSyntaxError:
            pass
        finally:
            """
            Find the topmost completed '<tag>...</tag>' Element
            Build .seq.xpath
            """
            for key in list(self.seq):
                self.xpath.append(key)
                if self.seq[key].count > 0:
                    self.element = self.seq[key].element
                    break
            self.xpath = '/'.join(self.xpath)

    def __str__(self):
        """
        String Representation of the Result
        :return: .xpath and list of .seq
        """
        return "Top Sequence Element:{}\n{}"\
            .format(self.xpath,
                    '\n'.join(["{:10}:{}"
                              .format(key, elem) for key, elem in self.seq.items()
                               ])
                    )

if __name__ == "__main__":
    with open('../test/uyalicihow.xml', 'rb') as xml_file:
        tse = TopSequenceElement(xml_file)
        print(tse)
Output:
Top Sequence Element:Items/Item
Items :Type:SEQUENCE, Count:0, Parent:None Events:['start']
Item :Type:SEQUENCE, Count:1, Parent:Items Events:['start', 'end', 'start']
Main :Type:SEQUENCE, Count:2, Parent:Item Events:['start', 'end', 'start', 'end']
Info :Type:SEQUENCE, Count:2, Parent:Item Events:['start', 'end', 'start', 'end']
Genres :Type:SEQUENCE, Count:2, Parent:Item Events:['start', 'end', 'start', 'end']
Products :Type:SEQUENCE, Count:1, Parent:Item Events:['start', 'end']
... (omitted for brevity)
Step 2: Now that you know there is a <Main> tag, you can do:
print(etree.tostring(tse.element.find('Main'), pretty_print=True).decode())
<Main>
  <Platform>iTunes</Platform>
  <PlatformID>353736518</PlatformID>
  <Type>TVEpisode</Type>
  <TVSeriesID>262603760</TVSeriesID>
</Main>
Step 3: Now that you know there is a <Platform> tag, you can do:
print(etree.tostring(tse.element.find('Main/Platform'), pretty_print=True).decode())
<Platform>iTunes</Platform>
Tested with Python:3.5.3 - lxml.etree:3.7.1
For very big files, reading is always a problem. I would suggest a simple algorithmic approach to reading the file itself. The key point is always the XML tags inside the file. I would suggest you read the XML tags, sort them into a heap, and then validate the contents of the heap accordingly.
Reading the file should also happen in chunks:
import xml.etree.ElementTree as etree

for event, elem in etree.iterparse(xml_file, events=('start', 'end', 'start-ns', 'end-ns')):
    store_in_heap(event, elem)
This will parse the XML file in chunks at a time and give it to you at every step of the way. start will trigger when a tag is first encountered. At this point elem will be empty except for elem.attrib that contains the properties of the tag. end will trigger when the closing tag is encountered, and everything in-between has been read.
You can also benefit from the namespaces reported by the start-ns and end-ns events; ElementTree provides these so you can gather all the namespaces used in the file.
Refer to this link for more information about namespaces
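One practical note to add (an assumption about the 5 GB use case, not something stated in the answer above): elements built by iterparse stay in memory unless you clear them, so you would typically discard each completed element once it has been processed:
import xml.etree.ElementTree as etree

for event, elem in etree.iterparse('big.xml', events=('end',)):
    if elem.tag == 'Item':      # 'Item' is taken from the sample file in the question
        # ... process the completed <Item> element here ...
        elem.clear()            # drop its children so memory stays bounded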
My interpretation of your needs is that you want to be able to parse the partial file and build up the structure of the document as you go. I've made some assumptions based on the file you uploaded:
Fundamentally you want to be parsing collections of things which have similar properties - I'm inferring this from the way you presented your desired output as a table with rows containing the values.
You expect these collections of things to have the same number of values.
You need to be able to parse partial files.
You don't worry about the properties of elements, just their contents.
I'm using xml.sax as this deals with arbitrarily large files and doesn't need to read the whole file into memory. Note that the strategy I'm following now doesn't actually scale that well as I'm storing all the elements in memory to build the dataframe, but you could just as well output the paths and contents.
In the sample file there is a problem with having one row per Item since there are multiples of the Genre tag and there are also multiple Product tags. I've handled the repeated Genre tags by appending them. This relies on the Genre tags appearing consecutively. It is not at all clear how the Product relationships can be handled in a single table.
import xml.sax
from collections import defaultdict

class StructureParser(xml.sax.handler.ContentHandler):
    def __init__(self):
        self.text = ''
        self.path = []
        self.datalist = defaultdict(list)
        self.previouspath = ''

    def startElement(self, name, attrs):
        self.path.append(name)

    def endElement(self, name):
        strippedtext = self.text.strip()
        path = '/'.join(self.path)
        if strippedtext != '':
            if path == self.previouspath:
                # This handles the "Genre" tags in the sample file
                self.datalist[path][-1] += f',{strippedtext}'
            else:
                self.datalist[path].append(strippedtext)
        self.path.pop()
        self.text = ''
        self.previouspath = path

    def characters(self, content):
        self.text += content
You'd use this like this:
parser = StructureParser()
try:
    xml.sax.parse('uyalicihow.xml', parser)
except xml.sax.SAXParseException:
    print('File probably ended too soon')
This will read the example file just fine.
Once this has run and probably printed "File probably ended too soon", you have the parsed contents in parser.datalist.
You obviously want to have just the parts which read successfully, so you can figure out the shortest list and build a DataFrame with just those paths:
import pandas as pd
smallest_items = min(len(e) for e in parser.datalist.values())
df = pd.DataFrame({key: value for key, value in parser.datalist.items() if len(value) == smallest_items})
This gives something similar to your desired output:
  Items/Item/Main/Platform  Items/Item/Main/PlatformID  Items/Item/Main/Type
0                   iTunes                   353736518             TVEpisode
1                   iTunes                   495275084             TVEpisode
The columns for the test file which are matched here are
>> df.columns
Index(['Items/Item/Main/Platform', 'Items/Item/Main/PlatformID',
'Items/Item/Main/Type', 'Items/Item/Main/TVSeriesID',
'Items/Item/Info/BaseURL', 'Items/Item/Info/EpisodeNumber',
'Items/Item/Info/HighestResolution',
'Items/Item/Info/LanguageOfMetadata', 'Items/Item/Info/LastModified',
'Items/Item/Info/Name', 'Items/Item/Info/ReleaseDate',
'Items/Item/Info/ReleaseYear', 'Items/Item/Info/RuntimeInMinutes',
'Items/Item/Info/SeasonNumber', 'Items/Item/Info/Studio',
'Items/Item/Info/Synopsis', 'Items/Item/Genres/Genre',
'Items/Item/Products/Product/URL'],
dtype='object')
Based on your comments, it appears more important to you to have all the elements represented, perhaps just showing a preview, in which case you can use only the first few elements of each list. Note that in this case the Products entries won't match the Item entries.
df = pd.DataFrame({key: value[:smallest_items] for key, value in parser.datalist.items()})
Now we get all the paths:
>> df.columns
Index(['Items/Item/Main/Platform', 'Items/Item/Main/PlatformID',
'Items/Item/Main/Type', 'Items/Item/Main/TVSeriesID',
'Items/Item/Info/BaseURL', 'Items/Item/Info/EpisodeNumber',
'Items/Item/Info/HighestResolution',
'Items/Item/Info/LanguageOfMetadata', 'Items/Item/Info/LastModified',
'Items/Item/Info/Name', 'Items/Item/Info/ReleaseDate',
'Items/Item/Info/ReleaseYear', 'Items/Item/Info/RuntimeInMinutes',
'Items/Item/Info/SeasonNumber', 'Items/Item/Info/Studio',
'Items/Item/Info/Synopsis', 'Items/Item/Genres/Genre',
'Items/Item/Products/Product/URL',
'Items/Item/Products/Product/Offers/Offer/Price',
'Items/Item/Products/Product/Offers/Offer/Currency'],
dtype='object')
There are a number of tools around that will generate a schema from a supplied instance document. I don't know how many of them will work on a 5Gb input file, and I don't know how many of them can be invoked from Python.
Many years ago I wrote a Java-based, fully streamable tool to generate a DTD from an instance document. It hasn't been touched in years but it should still run: https://sourceforge.net/projects/saxon/files/DTDGenerator/7.0/dtdgen7-0.zip/download?use_mirror=vorboss
There are other tools listed here: Any tools to generate an XSD schema from an XML instance document?
As I see it, your question is very clear. I gave it an upvote for clarity. You want to parse text.
Write a little text parser, we can call that EditorB, that reads in chunks of the file or at least line by line. Then edit or change it as you like and re-save that chunk or line.
It can be easy in Windows from 98SE on. It should be easy in other operating systems.
The process is (1) Adjust (manually or via program), as you currently do, we can call this EditorA, that is editing your XML document, and save it; (2) stop EditorA; (3) Run your parser or editor, EditorB, on the saved XML document either manually or automatically (started via detecting that the XML document has changed via date or time or size, etc.); (4) Using EditorB, save manually or automatically the edits from step 3; (5) Have your EditorA reload the XML document and go on from there; (6) do this as often as is necessary, making edits with EditorA and automatically adjusting them outside of EditorA by using EditorB.
Edit this way before you send the file.
It is a lot of typing to explain, but XML is just a glorified text document. It can be easily parsed and edited and saved, either character by character or by larger amounts line by line or in chunks.
As a further note, this can be applied via entire directory contained documents or system wide documents as I have done in the past.
Make certain that EditorA is stopped before EditorB is allowed to start its changes. Then stop EditorB before restarting EditorA. If you set this up as I described, then EditorB can run continually in the background, but put in it an automatic notifier (maybe a message box with options, or a little button that is set foremost on the screen when activated) that allows you to turn off (or continue with) EditorA before using EditorB. Or, as I would do it, put in a detector to keep EditorB from executing its own edits as long as EditorA is running.
B Lean

How to read and assign variables from an API return that's formatted as Dictionary-List-Dictionary?

So I'm trying to learn Python here, and would appreciate any help you guys could give me. I've written a bit of code that asks one of my favorite websites for some information, and the api call returns an answer in a dictionary. In this dictionary is a list. In that list is a dictionary. This seems crazy to me, but hell, I'm a newbie.
I'm trying to assign the answers to variables, but always get various error messages depending on how I write my {},[], or (). Regardless, I can't get it to work. How do I read this return? Thanks in advance.
{
  "answer": [
    {
      "widgets": 16,
      "widgets_available": 16,
      "widgets_missing": 7,
      "widget_flatprice": "156",
      "widget_averages": 15,
      "widget_cost": 125,
      "widget_profit": "31",
      "widget": "90.59"
    }
  ],
  "result": true
}
Edited because I put in the wrong sample code.
You need to show your code, but the de-facto way of doing this is by using the requests module, like this:
import requests

url = 'http://www.example.com/api/v1/something'
r = requests.get(url)
data = r.json()  # converts the returned JSON into a Python dictionary

for item in data['answer']:
    print(item['widgets'])
Assuming that you are not using the requests library (see Burhan's answer), you would use the json module like so:
data = '{"answer":
[{"widgets":16,
"widgets_available":16,
"widgets_missing":7,
"widget_flatprice":"156",
"widget_averages":15,
"widget_cost":125,
"widget_profit":"31",
"widget":"90.59"}],
"result":true}'
import json
data = json.loads(data)
# Now you can use it as you wish
data['answer'] # and so on...
First I will mention that to access a dictionary value you need to use ["key"], not {}; see the Python dictionary syntax here.
Here is a step by step walkthrough on how to build and access a similar data structure:
First create the main dictionary:
t1 = {"a":0, "b":1}
you can access each element by:
t1["a"] # it'll return a 0
Now let's add the internal list:
t1["a"] = ["x",7,3.14]
and access it using:
t1["a"][2] # it'll return 3.14
Now creating the internal dictionary:
t1["a"][2] = {'w1':7,'w2':8,'w3':9}
And access:
t1["a"][2]['w3'] # it'll return 9
Hope it helped you.

Variable name to string in Python

I have written some code in Python and I would like to save some of the variables into files having the name of the variable.
The following code:
variableName1 = 3.14
variableName2 = 1.09
save_variable_to_file(variableName1) # function to be defined
save_variable_to_file(variableName2)
should create two files, one named variableName1.txt and the other variableName2.txt, each containing the value of the corresponding variable at the time the function was called.
This variable name is supposed to be unique in the code.
Thanks!
HM
It is possible to find the name of a variable: a called function can get at its caller's variables using:
import sys
def save_variable_to_file(var):
names = sys._getframe(1).f_globals
names.update(sys._getframe(1).f_locals)
for key in names.keys():
if names[key] == var:
break
if key:
open(key+".txt","w").write(str(var))
else:
print(key,"not found")
thing = 42
save_variable_to_file(thing)
But it is probably a really bad idea. Note that I have converted the value to a string; how would you want dictionaries and lists to be saved? How are you going to reconstruct the variable later?
import glob

for fname in glob.iglob("*.txt"):
    vname = fname[:-len(".txt")]   # note: rstrip(".txt") would strip characters, not the suffix
    value = open(fname).read()
    exec(vname + "=" + value)

print(locals())
Using something like exec can be a security risk, it is probably better to use something like a pickle, JSON, or YAML.
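A hedged sketch of that safer route with JSON: keep the names and values in one dict and skip exec entirely (the file name is a placeholder):
import json

variables = {'variableName1': 3.14, 'variableName2': 1.09}

with open('variables.json', 'w') as f:
    json.dump(variables, f)

with open('variables.json') as f:
    restored = json.load(f)

print(restored['variableName1'])    # 3.14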
No, this won't work. When you write the variable name there, only the value of the variable, 3.14 and so on, is passed to save_variable_to_file(); the name itself is not available inside the function. Try one of these variants:
d = dict(my_variable_name=3.14)
save_variable_to_file('my_variable_name', d['my_variable_name'])
or:
var_name = 'my_variable_name'
d = {var_name: 3.14}
save_variable_to_file(var_name, d[var_name])
Here is a good tutorial, that you should definitely go through, if you're serious about learning Python.
Unfortunately it is not possible to find out the name of a variable. Either you extend your function to also allow a string as a parameter, or you have to use another solution:
save_variable_to_file("variableName1", variableName1)
Another solution would be to store your variables within a dict which allows the retrieval of the keys as well:
myVariables = {}
myVariables["variableName1"] = 3.14

for key, value in myVariables.items():
    save_variable_to_file(key, value)
How about using a dict:
var_dict = {'variable1': 3.14, 'variable2': 1.09}
for key, value in var_dict.items():
    with open('path\\%s.txt' % key, "w") as file:
        file.write(str(value))
for key, value in locals().items():
    with open(key + '.txt', 'w') as f:
        f.write(str(value))
should do the trick - as long as all locally defined variables are to be considered.
A far better solution, however, would be to put all you need into a dict and proceed as the others have already proposed.
You could even do this:
def save_files(**files):
    for key, value in files.items():
        with open(key + '.txt', 'w') as f:
            f.write(value)
and then
save_files(**my_previously_defined_dict)
save_files(**locals()) # as example above
save_files(filename='content')
Variables don't have names. When your script runs, it creates one or more identifiers for some of the objects it interacts with, commonly called variables, but these are temporary and cease to exist when the program ends. Even if you save them, you will then face the reverse problem of turning each saved name back into an identifier associated with the saved value.
If you want to get the name of your variable as a string, use the python-varname package (Python 3):
from varname import nameof
s = 'Hey!'
print(nameof(s))
Output:
s
Get the package here:
https://github.com/pwwang/python-varname
