How do I import a text file dictionary into a Python dictionary - python

I am creating a tournament scoring system application.
There are many sections to this app, for example, Participants, Team, Events and Award points.
Right now I am working on the team section. I was able to create something that will allow the user to create teams.
The code looks like this.
teamname = input("Team Name: ").title()
First_member = input("First Member Full Name: ").title()
if any(i.isdigit() for i in First_member):
    print("Invalid Entry")
else:
    ...  # the remaining members are collected the same way
There can only be five members in each team.
This is how the data is saved:
combine = '\'' + teamname + '\' : \'' + First_member + '\', \'' + Second_member + '\', \'' + Third_member + '\', \'' + Forth_member + '\', \'' + Fifth_member + '\''
myfile = open("teams.txt", "a+")
myfile.writelines(combine)
myfile.writelines('\n')
myfile.close()
Now, if I want to remove a team, how do I do that?
Apologies if you feel like I am wasting your time, but thanks for stopping by.
If you want to see everything, please check out this link:
https://repl.it/#DaxitMahendra/pythoncode

If the text file creation is in your hands, then you should just use a proper YAML file format. Your current format is fairly similar to YAML.
Once you have YAML you can use PyYAML: https://pyyaml.org/wiki/PyYAMLDocumentation
This answer: YAML parsing and Python? has an example of the format and how to parse it as well.
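A minimal sketch of that approach, assuming the five member variables from the question and a teams.yaml file (the filename is an invention for this example):
import yaml  # PyYAML

teams = {teamname: [First_member, Second_member,
                    Third_member, Forth_member, Fifth_member]}

with open("teams.yaml", "w") as f:
    yaml.safe_dump(teams, f)    # writes valid YAML instead of an ad-hoc line

with open("teams.yaml") as f:
    loaded = yaml.safe_load(f)  # comes back as a regular Python dict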

You can do a quick fix: change your "combine" variable to a valid dict format, then use the "ast" module. Here is an example:
# proxy items
teamname, First_member, Second_member, Third_member, Forth_member, Fifth_member = [str(temp_name).zfill(3) for temp_name in range(6)]
# add "{" and "}" at the start/end and put your values into a list: "key": ["value1", "value2", ...]
combine = '{\'' + teamname + '\' : [\'' + First_member + '\', \'' + Second_member + '\', \'' + Third_member + '\', \'' + Forth_member + '\', \'' + Fifth_member + '\']}'
import ast
teams = ast.literal_eval(combine)  # renamed from "dict" so the built-in is not shadowed
print(teams.keys(), type(teams))
>>dict_keys(['000']) <class 'dict'>
For your case, change the "combine" variable:
combine = '{\'' + teamname + '\' : [\'' + First_member + '\', \'' + Second_member + '\', \'' + Third_member + '\', \'' + Forth_member + '\', \'' + Fifth_member + '\']}'
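To answer the removal part of the question: once each line of teams.txt holds one such dict, a sketch for removing a team could parse the file back, drop the matching entry, and rewrite it (remove_team is a hypothetical helper, not part of the original code):
import ast

def remove_team(name, path="teams.txt"):
    # each line holds one dict in the format built by "combine" above
    with open(path) as f:
        teams = [ast.literal_eval(line) for line in f if line.strip()]
    with open(path, "w") as f:
        for team in teams:
            if name not in team:  # keep every team except the one to remove
                f.write(repr(team) + "\n")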

Related

Finding lines by keywords and getting the value of said line in python

I am quite new to Python and I would like to ask the following:
Let's say for example i have the following .txt file:
USERNAME -- example.name
SERVER -- server01
COMPUTERNAME -- computer01
I would like to search through this document for the 3 keywords: USERNAME, SERVER and COMPUTERNAME, and when I find these, I would like to extract their values, i.e. "example.name", "server01" and "computer01" respectively for each line.
Is this possible? I have already tried looking things up by line number, but I would much prefer searching by keywords instead.
Question 2: Somewhere in this .txt file there is a line with the keyword Adresses: which has multiple values listed on separate lines, like so:
Adresses:
6001:8000:1080:142::12
8002:2013:2380:110::53
9007:2013:2380:117::80
.
.
Would there be any way to get all of the listed addresses as a result, not just the first one? The number of addresses is dynamic, so it may change in the future.
For this I honestly have no idea how to begin. I appreciate any kind of hint or pointer in the right direction.
Thank you very much for your time and attention!
Like this:
with open("filename.txt") as f:
for x in f:
a = x.split(" -- ")
print(a[1])
If the line with a given value always starts with the keyword, you can try something like this:
with open('file.txt', 'r') as file:
    for line in file:
        if line.startswith('keyword'):
            keyword, value = line.split(' -- ')
To gather all the addresses, I'd initialise a list of addresses beforehand, then add the line
addresses.append(value)
inside the if statement.
Your best friend for this kind of task will be the str.split function. You can put your data in a dict which maps keywords to values:
data = {}                                    # Create a new dict
with open('data.txt') as file:               # Open the file containing the data
    lines = file.read().split('\n')          # Split the file into lines
    for line in lines:                       # For each line
        keyword, value = line.split(' -- ')  # Extract keyword and value
        data[keyword] = value                # Put it in the dict
Then you can access your values with data['USERNAME'], for example. This method will work on any document containing a key-value association on each line (even if you have more than 3 keywords). However, it will not work if the same text file contains the addresses in the format you mentioned.
If you want to include the addresses in the same file, you'll need to adapt the code. You can, for example, check whether the split line contains two elements (= key-value on the same line, like USERNAME) or only one (= key-value spread over multiple lines, like Addresses:); see the sketch after the example dict below. Then you can store all the different addresses in a list and bind this list to the data dict. It's not a problem to have a dict in the form:
{
    'USERNAME': 'example.name',
    'Addresses': ['6001:8000:1080:142::12', '8002:2013:2380:110::53']
}
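A sketch that handles both line types in one pass (the rule that a trailing colon starts a multi-line block is an assumption about the file format):
data = {}
current_list = None
with open('data.txt') as file:
    for line in file:
        line = line.rstrip('\n')
        if ' -- ' in line:                     # key-value on a single line
            keyword, value = line.split(' -- ')
            data[keyword] = value
            current_list = None
        elif line.endswith(':'):               # e.g. "Adresses:" opens a block
            current_list = data.setdefault(line[:-1], [])
        elif line.strip() and current_list is not None:
            current_list.append(line.strip())  # one address per line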

How do I 'professionally' store small data in Python? [duplicate]

I need to store basic data about customers, the cars they bought, and the payment schedule for these cars. The data comes from a GUI written in Python. I don't have enough experience to use a database system like SQL, so I want to store my data in a file as plain text. It doesn't have to be online.
To be able to search and filter the data, I first convert my data (lists of lists) to a string; when I need the data, I re-convert it to regular Python list syntax. I know it is a very brute-force way, but is it safe to do it like that, or can you advise me on another way?
It is never safe to save your database in a text format (or using pickle or whatever). There is a risk that problems while saving the data may cause corruption, not to mention the risk of your data being stolen.
As your dataset grows, there may also be a performance hit.
Have a look at sqlite (or sqlite3), which is small and easier to manage than MySQL, unless you have a very small dataset that will fit in a text file.
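As a taste of how little code that takes, here is a minimal sqlite3 sketch (the table and column names are invented for illustration):
import sqlite3

con = sqlite3.connect('customers.db')  # a single file on disk, no server needed
con.execute('CREATE TABLE IF NOT EXISTS cars (customer TEXT, model TEXT, price REAL)')
con.execute('INSERT INTO cars VALUES (?, ?, ?)', ('Alice', 'Ford Ka', 7500.0))
con.commit()
for row in con.execute('SELECT * FROM cars WHERE customer = ?', ('Alice',)):
    print(row)
con.close()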
P.S. By the way, using Berkeley DB in Python is simple; you don't have to learn all the DB things, just import bsddb.
The answer to use pickle is good, but I personally prefer shelve. It allows you to keep variables in the same state they were in between launches and I find it easier to use than pickle directly. http://docs.python.org/library/shelve.html
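A minimal sketch of the shelve idea (the filename and key are made up):
import shelve

# a shelf behaves like a persistent dict of picklable objects
with shelve.open('appdata') as db:
    db['customers'] = [['Alice', 'Ford Ka', '2011-01-01']]
    print(db['customers'])  # the data survives between program runs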
I agree with the others that serious and important data would be more secure in some type of light database, but I can also sympathise with the wish to keep things simple and transparent.
So, instead of inventing your own text-based data format, I would suggest you use YAML.
The format is human-readable for example:
List of things:
- Alice
- Bob
- Evan
You load the file like this:
>>> import yaml
>>> stream = open('test.yaml', 'r')
>>> data = yaml.safe_load(stream)  # safe_load is preferred over plain load
And data will look like this:
{'List of things': ['Alice', 'Bob', 'Evan']}
Of course you can do the reverse too and save data into YAML, the docs will help you with that.
At least another alternative to consider :)
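For the reverse direction, a short sketch with the same data:
>>> with open('test.yaml', 'w') as f:
...     yaml.safe_dump({'List of things': ['Alice', 'Bob', 'Evan']}, f)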
Very simple and basic (more at http://pastebin.com/A12w9SVd):
import json, os

db_name = 'udb.db'

def check_db(name=db_name):
    # create an empty db file if it does not exist yet
    if not os.path.isfile(name):
        print('no db\ncreating..')
        udb = open(name, 'w')
        udb.close()

def read_db():
    check_db()
    try:
        with open(db_name, 'r') as udb:
            return json.load(udb)
    except ValueError:  # empty or corrupt file: start fresh
        return {}

def update_db(newdata):
    data = read_db()
    data.update(newdata)  # merge the new entries into the stored dict
    with open(db_name, 'w') as udb:
        json.dump(data, udb)
using:
def adduser():
    print('add user:')
    name = input('name > ')  # raw_input in Python 2
    password = input('password > ')
    update_db({name: password})
You can use this lib to write an object into a file http://docs.python.org/library/pickle.html
Writing data to a plain file isn't a safe way of storing data. Better to use a simple database library like SQLAlchemy; it is an ORM for easy database usage...
You can also keep simple data in a plain text file. You then have little support, however, for checking the consistency of the data, duplicate values, etc.
Here is my simple 'card file' type data-in-a-text-file code snippet, using namedtuple so that you can access values not only by index in the line but by their header name:
# text based data input with data accessible
# with named fields or indexing
from collections import namedtuple
import string

filein = open("sample.dat")
datadict = {}

headerline = filein.readline().lower()  ## lowercase field names, Python style
## the first non-letter, non-number character is taken to be the separator
separator = headerline.strip(string.ascii_lowercase + string.digits)[0]
print("Separator is '%s'" % separator)

headerline = [field.strip() for field in headerline.split(separator)]
Dataline = namedtuple('Dataline', headerline)
print('Fields are:', Dataline._fields, '\n')

for data in filein:
    data = [f.strip() for f in data.split(separator)]
    d = Dataline(*data)
    datadict[d.id] = d  ## hash id values for fast lookup (key field)

## examples based on the sample.dat file example
key = '123'
print('Email of record with key %s by field name is: %s' %
      (key, datadict[key].email))
## by number
print('Address of record with key %s by field number is: %s' %
      (key, datadict[key][3]))

## print the dictionary on separate lines for clarity
for key, value in datadict.items():
    print('%s: %s' % (key, value))

input('Ready')  ## let the output be seen when run directly

""" Output:
Separator is ';'
Fields are: ('id', 'name', 'email', 'homeaddress')
Email of record with key 123 by field name is: gishi@mymail.com
Address of record with key 123 by field number is: 456 happy st.
345: Dataline(id='345', name='tony', email='tony.veijalainen@somewhere.com', homeaddress='Espoo Finland')
123: Dataline(id='123', name='gishi', email='gishi@mymail.com', homeaddress='456 happy st.')
Ready
"""

Apache NiFi: Processing multiple csv's using the ExecuteScript Processor

I have a csv with 70 columns. The 60th column contains a value which decides whether the record is valid or invalid: if the 60th column has 0, 1, 6 or 7, it's valid; if it contains any other value, it's invalid.
I realised that this functionality wasn't possible relying purely on changing the properties of processors in Apache NiFi. Therefore I decided to use the ExecuteScript processor and added this Python code as the script body.
import csv

valid = 0
invalid = 0
total = 0

file2 = open("invalid.csv", "w")
file1 = open("valid.csv", "w")
valid_writer = csv.writer(file1)
invalid_writer = csv.writer(file2)

with open('/Users/himsaragallage/Desktop/redder/Regexo_2019101812750.dat.csv') as f:
    r = csv.reader(f)
    for row in r:  # iterate over the csv reader, not the raw file
        total += 1
        if row[59] in ("0", "1", "6", "7"):  # the 60th column decides validity
            valid += 1
            valid_writer.writerow(row)
        else:
            invalid += 1
            invalid_writer.writerow(row)

file1.close()
file2.close()
print("Total : " + str(total))
print("Valid : " + str(valid))
print("Invalid : " + str(invalid))
I have no idea how to use a session and code within the ExecuteScript processor as shown in this question. So I just wrote a simple Python script and directed the valid and invalid data to different files. This approach has many limitations:
I want to be able to dynamically process csvs with different filenames.
The csv which the invalid data is sent to must have the same filename as the input csv.
There would be around 20 csvs in my redder folder. All of them must be processed in one go.
I hope you can suggest a method for me to do this. Feel free to provide a solution by editing the Python code I have used, or even by using a completely different set of processors and excluding the ExecuteScript processor entirely.
Here are complete step-by-step instructions on how to use the QueryRecord processor.
Basically, you need to set up the highlighted properties.
You want to route records based on values from one column. There are various ways to make this happen in NiFi. I can think of the following:
Use QueryRecord processor to partition records by column values
Use RouteOnContent processor to route using a regular expression
Use ExecuteScript processor to create a custom routing logic
Use PartitionRecord processor to route based on RecordPaths
I will show you how to solve your problem using the PartitionRecord processor. Since you did not provide any example data, I created an example use case: I want to distinguish cities in Europe from cities elsewhere. The following data is given:
id,city,country
1,Berlin,Germany
2,Paris,France
3,New York,USA
4,Frankfurt,Germany
Flow:
GenerateFlowFile:
PartitionRecord:
CSVReader should be set up to infer the schema, and CSVRecordSetWriter to inherit the schema. PartitionRecord will group records by country and pass them on together with a country attribute that holds the country value. You will see the following groups of records:
id,city,country
1,Berlin,Germany
4,Frankfurt,Germany
id,city,country
2,Paris,France
id,city,country
3,New York,USA
Each group is a flowfile and will have the country attribute, which you will use to route the groups.
RouteOnAttribute:
All countries from Europe will be routed to the is_europe relationship. Now you can apply the same strategy to your use case.
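As a sketch, the routing property on RouteOnAttribute could look like this (the property name is_europe and the two-country list are assumptions for this example):
is_europe: ${country:equals('Germany'):or(${country:equals('France')})}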

I'm trying to make a Magic the Gathering app, how should I go about storing the card attributes?

I'm working on my first non-school-related project and I'm a little bit stumped on how I should go about handling something. The project is in Python, but I guess the question itself is more conceptual/OOP-based. The goal of the project is to create an application that takes input for a 40-card MtG deck and displays some probability figures based on the contents of the deck. Using Scryfall's API I have made a text document that contains the card information for a set of Magic cards. The text document is actually just a bunch of lines, with each line containing a JSON-format dictionary, and I have all the information I need in the text file. It looks something like this:
{ 'name' : 'card_1_name', 'attribute1' : ' card_1_attribute1', 'attribute2' : 'card_1_attribute2', 'etc' : 'etc'}
{ 'name' : 'card_2_name', 'attribute1' : ' card_2_attribute1', 'attribute2' : 'card_2_attribute2', 'etc' : 'etc'}
{ 'name' : 'card_3_name', 'attribute1' : ' card_3_attribute1', 'attribute2' : 'card_3_attribute2', 'etc' : 'etc'}
For those familiar with MtG, think:
{'name' : 'lightning bolt', 'color' : 'red', 'type' : 'instant', ...., 'etc' : 'etc'}
It goes on for about 250 different cards. I can read the document to get any information about any card I want, though because the formatting is weird I have to use a for-loop that reads each line individually and converts it to an actual Python dictionary using ast.literal_eval, in order to find the line that contains the dictionary with the 'name' of the card I want.
Here's what I'm currently trying to do. I figured I could create a class 'card' that has the attributes that every magic card has, like a name and a color. I want to then make subclasses of the 'card' class that contain other attributes that are specific to certain card types, for example a card type in MtG is 'creature'. Only creatures have the 'power' attribute and the 'toughness' attribute. So the class 'creature' would inherit the basic attributes from 'card' and also have its own special attributes.
My line of thinking is that a deck of cards should be a list of card objects. My problem is I am having a hard time automating the process of making the card objects. This is the 'card' class I have currently:
class magic_card:
    def __init__(self, card_dict):  # card_dict is the dictionary that contains the card data
        self.name = card_dict['name']
        self.cmc = card_dict['cmc']
        self.colors = card_dict['colors']
        self.rarity = card_dict['rarity']
How can I automate the process of creating these ~250 card objects so that the card's name is the name of the card instance? What I mean is, I want to be able to make a loop that does this:
f = open("document_that_holds_card_information.txt", "r")
for i in range(1, 251):
    card_dictionary = ast.literal_eval(f.readline())
    card_name = (card_dictionary['name'])
    card_name = magic_card(card_dictionary)  # I want this magic_card instance's name to be the value stored in card_dictionary['name']
f.close()
Is this doable? Or am I overthinking things? My other thought is I could just reformat my text document so that it is easier to search through by formatting it to look like
['card_name1' : {#dictionary contents}, 'card_name2' : {#dictionary contents}, 'card_name3' : {#dictionary contents}....]
I'm really struggling with this and I feel like I might be tunnel visioning on a bad approach. Please help!
Edit: Alright, the consensus seems to be that I am overthinking it. I've decided to reformat the file containing my card data into proper JSON instead of multiple lines of JSON. Instead of having a deck of cards be a list of card objects, a deck of cards will be a list of card dictionaries.
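Given that reformatting, a sketch of loading everything and indexing it by name (the assumption being that the file now holds one JSON array of card objects):
import json

with open("document_that_holds_card_information.txt") as f:
    all_cards = json.load(f)  # one JSON array of card dicts

# index by name so any card is a single dict lookup away
cards_by_name = {card['name']: card for card in all_cards}
print(cards_by_name['lightning bolt']['color'])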

Building a ranking list in Python: how to assign scores to contestants?

(I posted this on the wrong section of Stackexchange before, sorry)
I'm working on an assignment which is way above my head. I've tried for days on end to figure out how to do this, but I just can't get it right...
I have to make a ranking list in which I can enter a user, alter the user's score, register whether he/she paid, and display all users in order of highest score.
The first part I got to work with CSV; I've put only the basic part in here to save space. The menu and import csv have been done. (I had to translate a lot from my native language; sorry if there is a mistake, I know it's a bad habit.)
more = True
while more:
    print("-" * 40)
    firstname = raw_input("What is the first name of the user?: ")
    with open("user.txt", "a") as scoreFile:
        scoreWrite = csv.writer(scoreFile)
        scoreWrite.writerow([firstname, "0", "no"])
    mr_dnr = raw_input("Need to enter more people? If so, enter 'yes' \n")
    more = mr_dnr.strip().lower() == "yes"  # compare, rather than test substring membership
This way I can enter the name. Now I need a second part (another option in the menu, of course) to:
let the user enter the name of the person
after that, enter the (new) score of that person.
So it needs to alter the second value of an entry in the csv file ("0") to something the user enters, without erasing the name already in the csv file.
Is this even possible? A kind user suggested using SQLite3, but this basic CSV stuff is already stretching far beyond my capabilities...
Your friend is right that SQLite3 would be a much better approach for this. If that is stretching your knowledge too far, I suggest having a directory called users with one file per user. Use JSON (or pickle) to write the user information, and overwrite the entire file each time you need to update it; a sketch follows.
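A sketch of that per-user-file idea (the directory name, file layout, and field names are assumptions):
import json, os

USERS_DIR = "users"

def save_user(user):
    if not os.path.isdir(USERS_DIR):
        os.mkdir(USERS_DIR)
    path = os.path.join(USERS_DIR, user["firstname"] + ".json")
    with open(path, "w") as f:
        json.dump(user, f)  # overwrite the whole file on every update

def ranking():
    users = []
    for filename in os.listdir(USERS_DIR):
        with open(os.path.join(USERS_DIR, filename)) as f:
            users.append(json.load(f))
    # highest score first, as the assignment asks
    return sorted(users, key=lambda u: u["score"], reverse=True)

save_user({"firstname": "Anna", "score": 0, "paid": "no"})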
