Defining data and an algorithm in Python

Recently I started to learn Python.
There is a program I need to write.
Please point me in the right direction:
The program takes two values from a file: a company name and the number of people in it.
Next I need to do the following calculation: find the company with the fewest people -> define it as one unit of the whole -> compute the ratio of every other company to that unit -> count up those ratios over some period of time, uniformly.
First, I don't know what to do first. Should I define all the data like "Company = number", or use a dict?
I'm not asking you to solve the problem; I'm asking you to teach me.
B.R.

For your example, if you want to keep it as easy as possible, put all your data in dicts; you don't need to create a database or a file to store the data. Create one dict per entity and make the relationships between them with IDs, like this (assume we have client and orders entities):
client = {
    1: {"name": "name_1", "email": "name_1@gmail.com"},
    2: {"name": "name_2", "email": "name_2@gmail.com"}
}
orders = {1: {"date": "2016-02-01", "client_id": 1}, ...}
and so on.
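To make the original question concrete, here is a minimal sketch of the first steps, assuming the input file is named companies.txt and each line holds a company name and a headcount separated by a comma (both are assumptions; adapt them to your actual data):

def load_companies(path):
    # Read "CompanyName,42" lines into a {name: headcount} dict.
    companies = {}
    with open(path) as f:
        for line in f:
            name, count = line.strip().split(",")
            companies[name] = int(count)
    return companies

companies = load_companies("companies.txt")
smallest = min(companies, key=companies.get)   # company with the fewest people
unit = companies[smallest]                     # define it as one unit
ratios = {name: count / unit for name, count in companies.items()}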

Python convert list into split lists

So I have been given the task of using an API to pull student records and learner IDs to put into an in-house application. The JSON formatting is dreadful, and the only successful way I managed to split students individually is by the last value.
Now I am at the next stumbling block: I need to split these student lists into smaller sections, so I implemented a for loop like so:
student = request.text.split('"SENMajorNeedsDetails"')
for students in student:
    r = str(student).split(',')
    print(student[0], student[1])
    print(r[0], r[1])
This works perfectly, except that it puts everything into a single list again, and each student record isn't a set length (some have more values/fields than others).
So what I am looking to do is have a list for each student, split on the comma, so that student1 would equal [learnerID,personID,name,etc...].
That way, when I want to reference the learnerID, I can call learner1[0].
It is also very possible that I am going about this the wrong way and should be doing some other form of list comprehension.
My step-by-step process that I am aiming towards is:
pull data from system - DONE
split data into individual students - DONE
take learnerID, name, group of each student and add a database entry
I have split step 3 into two stages, where the first involves my issue above and the second is creating the database records.
Below is a shortened example of the list item student[0], followed by student[1]; if more is needed, say so.
:null},{"LearnerId":XXXXXX,"PersonId":XXXXXX,"LearnerCode":"XXXX-XXXXXX","UPN":"XXXXXXXXXXX","ULN":"XXXXXXXXXX","Surname":"XXXXX","Forename":"XXXXX","LegalSurname":"XXXXX","LegalForename":"XXXXXX","DateOfBirth":"XX/XX/XXXX 00:00:00","Year":"XX","Course":"KS5","DateOfEntry":"XX/XX/XXXX 00:00:00","Gender":"X","RegGroup":"1XX",],
:null},{"LearnerId":YYYYYYY,"PersonId":YYYYYYYY,"LearnerCode":"XXXX-YYYYYYYY","UPN":"YYYYYYYYYY","ULN":"YYYYYYYYYY","Surname":"YYYYYYYY","Forename":"YYYYYY","LegalSurname":"YYYYYY","LegalForename":"YYYYYYY","DateOfBirth":"XX/XX/XXXX 00:00:00","Year":"XX","Course":"KS5","DateOfEntry":"XX/XX/XXXX 00:00:00","Gender":"X","RegGroup":"1YY",],
Sorry, the editor doesn't like putting it on separate lines.
EDIT: changed wording at the end and added a redacted student record.
Just to clarify, the resolution to my issue was to learn how to parse JSON properly. This was pointed out by @Patrick Haugh, and all credit should go to him for pointing me in the right direction. The second most helpful person was @ArndtJonasson.
The problem was that I was manually trying to do the job of the JSON library, and I am nowhere near that level of competency yet. As stated originally, it was entirely likely that I was going about it in completely the wrong way.
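For reference, a minimal sketch of that proper approach, assuming the response body is valid JSON whose top level is a list of student objects with the fields shown above (that structure is an assumption based on the redacted records):

import json

# Let the json library do the parsing instead of splitting strings by hand.
students = json.loads(request.text)
for student in students:
    print(student["LearnerId"], student["Surname"], student["Forename"])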

Create variable for each unique value over two lists

Apologies in advance for the lengthy post. I am nominally familiar with Python, but think it might be able to easily accomplish the task. Some background:
I have survey data where respondents were asked to select the two schools they’re considering applying to out of a list of 1500 or so. The data are stored as two variables (one per institution selected – vname “Institution_1”, “Institution_2”) where each value uniquely identifies a particular institution.
Later on, respondents rate the institutions they selected on a 1 to 6 scale on a series of attributes. Each of these ratings is stored as a separate scale variable in the data, and I have two of them, corresponding to the position in which the institution was selected. If, for example, Adelphi University is "Institution_1", then the rating on "Core academics" is stored in variable "Q.32_combined_1"; if Adelphi University is "Institution_2", then the rating on "Core academics" is stored in variable "Q.36_combined_1".
I want to combine the ratings for each institution and here’s the SPSS syntax for doing so for this one institution (Adelphi is uniquely identified with a meaningful value of 188429):
DO IF (Institution_1 = 188429).
COMPUTE Adelphi_CoreAcad=Q.32_combined_1.
ELSE IF (Institution_2 = 188429).
COMPUTE Adelphi_CoreAcad =Q.36_combined_1.
END IF.
EXECUTE.
But we have 1,000+ institutions in our data. How can we create a variable for each unique value over these two lists (Institution_1 and Institution_2)?
Is there a way to use Python to create these variables and/or build the SPSS syntax that would work?
Thanks!
Try this. It's rough, since I don't have SPSS, but I think it's what you're asking for. (Note: I'm not sure that what you're asking for is the right thing, but see if it works, and maybe we'll go from there.)
This creates a set of variables named U188429_CoreAcad, etc., where the U is just a leading prefix ("U" for "Unit ID"), 188429 is the unit ID, and "CoreAcad" is a made-up string you can change.
I used the categories 'CoreAcad', 'PrettyCoeds', 'FootballTeam' and 'Drinking', because if I had it all to do over again, that's how I would have rated schools. (Except for 'CoreAcad', which was your thing.)
I assumed that your categories were 32-35 for institution 1, and 36-39 for institution 2. You can change those below as well.
I assumed that you can spss.Submit a bunch of lines together. If not, split the string up and submit the lines one at a time.
I commented out "BEGIN PROGRAM", "import spss", "END PROGRAM" because I'm just feeding stuff into a command-line python2.7. Uncomment those for your use.
#BEGIN PROGRAM.
#import spss, spssaux

# According to the internet, unitids are sparse values.
Unit_ids = [
    188429,  # Adelphi
    188430,  # Random #s
    171204,
    100001,
]

Categories = {
    'CoreAcad': ('Q.32_combined_1', 'Q.36_combined_1'),
    'PrettyCoeds': ('Q.33_combined_1', 'Q.37_combined_1'),
    'FootballTeam': ('Q.34_combined_1', 'Q.38_combined_1'),
    'Drinking': ('Q.35_combined_1', 'Q.39_combined_1'),
}

code = """
DO IF (Institution_1 = %(unitid)d).
COMPUTE U%(unitid)d_%(category)s = %(answer1)s.
ELSE IF (Institution_2 = %(unitid)d).
COMPUTE U%(unitid)d_%(category)s = %(answer2)s.
END IF.
EXECUTE.
"""

for unitid in Unit_ids:
    for category, answers in Categories.iteritems():
        answer1, answer2 = answers
        print(code % locals())
        #spss.Submit(code % locals())
#END PROGRAM.
I suggest a different solution, based on restructuring the data:
First, separate the two institutions into two rows, each with its corresponding ratings:
varstocases /make institution from Institution_1 Institution_2
/make CoreAcad from Q.32_combined_1 Q.36_combined_1
/make otherRting from inst1var inst2var.
You can add another make subcommand for each additional rating that corresponds to each of the two institutions.
At this point your data has one row per institution, together with its ratings.
You can now analyze them, eg:
means CoreAcad otherRting by institution.
Or you can aggregate by institution to analyze their ratings. For example:
DATASET DECLARE AggByInst.
AGGREGATE /OUTFILE='AggByInst' /BREAK=institution
/MCoreAcad MotherRting =MEAN(CoreAcad otherRting).

GAE NDB model sequential ID [duplicate]

I have to label something in a "strong monotone increasing" fashion, be it invoice numbers, shipping label numbers, or the like.
A number MUST NOT BE used twice
Every number SHOULD BE used when exactly all smaller numbers have been used (no holes).
Fancy way of saying: I need to count 1,2,3,4 ...
The number space I have available typically holds 100,000 numbers, and I need perhaps 1,000 a day.
I know this is a hard problem in distributed systems, and often we are much better off with GUIDs. But in this case, for legal reasons, I need "traditional" numbering.
Can this be implemented on Google AppEngine (preferably in Python)?
If you absolutely have to have sequentially increasing numbers with no gaps, you'll need to use a single entity, which you update in a transaction to 'consume' each new number. You'll be limited, in practice, to about 1-5 numbers generated per second - which sounds like it'll be fine for your requirements.
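A minimal sketch of that single-entity approach in NDB (the Counter model and the 'invoice' key name are my own illustration, not an established API):

from google.appengine.ext import ndb

class Counter(ndb.Model):
    value = ndb.IntegerProperty(default=0)

@ndb.transactional
def next_number():
    # Read-modify-write inside a transaction, so no number is handed out
    # twice and no number is skipped.
    counter = Counter.get_by_id('invoice') or Counter(id='invoice')
    counter.value += 1
    counter.put()
    return counter.value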
If you drop the requirement that IDs must be strictly sequential, you can use a hierarchical allocation scheme. The basic idea/limitation is that transactions must not affect multiple storage groups.
For example, assuming you have the notion of "users", you can allocate a storage group for each user (creating some global object per user). Each user has a list of reserved IDs. When allocating an ID for a user, pick a reserved one (in a transaction). If no IDs are left, make a new transaction allocating 100 IDs (say) from the global pool, then make a new transaction to add them to the user and simultaneously withdraw one. Assuming each user interacts with the application only sequentially, there will be no concurrency on the user objects.
The gaetk - Google AppEngine Toolkit now comes with a simple library function to get a number in a sequence. It is based on Nick Johnson's transactional approach and can be used quite easily as a foundation for Martin von Löwis' sharding approach:
>>> from gaetk.sequences import *
>>> init_sequence('invoice_number', start=1, end=0xffffffff)
>>> get_numbers('invoice_number', 2)
[1, 2]
The functionality is basically implemented like this:
def _get_numbers_helper(keys, needed):
    results = []
    for key in keys:
        seq = db.get(key)
        start = seq.current or seq.start
        end = seq.end
        avail = end - start
        consumed = needed
        if avail <= needed:
            seq.active = False
            consumed = avail
        seq.current = start + consumed
        seq.put()
        results += range(start, start + consumed)
        needed -= consumed
        if needed == 0:
            return results
    raise RuntimeError('Not enough sequence space to allocate %d numbers.' % needed)

def get_numbers(needed):
    query = gaetkSequence.all(keys_only=True).filter('active = ', True)
    return db.run_in_transaction(_get_numbers_helper, query.fetch(5), needed)
If you aren't too strict on the sequential, you can "shard" your incrementer. This could be thought of as an "eventually sequential" counter.
Basically, you have one entity that is the "master" count. Then you have a number of entities (based on the load you need to handle) that have their own counters. These shards reserve chunks of ids from the master and serve out from their range until they run out of values.
Quick algorithm:
You need to get an ID.
Pick a shard at random.
If the shard's start is less than its end, take its start and increment it.
If the shard's start is equal to (or, worse, greater than) its end, go to the master, take its value, and add an amount n to it. Set the shard's start to the retrieved value plus one, and its end to the retrieved value plus n.
This can scale quite well; however, the amount you can be out by is the number of shards multiplied by your n value. If you want your records to appear to go up, this will probably work, but if you want them to accurately represent order, it won't. It is also important to note that the latest values may have holes, so if you are scanning the range for some reason, you will have to mind the gaps.
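A rough sketch of that scheme in NDB follows; every kind, field, and function name here is my own illustration. Note that concurrent refills can abandon a reserved chunk, which just adds to the holes mentioned above.

import random
from google.appengine.ext import ndb

BATCH = 100       # n: how many ids a shard reserves from the master at once
NUM_SHARDS = 10

class Master(ndb.Model):
    next_value = ndb.IntegerProperty(default=1)

class Shard(ndb.Model):
    start = ndb.IntegerProperty(default=0)   # next id this shard will hand out
    end = ndb.IntegerProperty(default=0)     # one past the last reserved id

@ndb.transactional
def _reserve_batch():
    # Take a chunk of BATCH ids from the master counter.
    master = Master.get_by_id('master') or Master(id='master')
    first = master.next_value
    master.next_value += BATCH
    master.put()
    return first

@ndb.transactional
def _take(shard_id):
    # Hand out one id from the shard, or None if it is exhausted.
    shard = Shard.get_by_id(shard_id) or Shard(id=shard_id)
    if shard.start >= shard.end:
        return None
    value = shard.start
    shard.start += 1
    shard.put()
    return value

@ndb.transactional
def _refill(shard_id, first):
    shard = Shard.get_by_id(shard_id) or Shard(id=shard_id)
    shard.start, shard.end = first, first + BATCH
    shard.put()

def next_id():
    shard_id = 'shard-%d' % random.randrange(NUM_SHARDS)
    value = _take(shard_id)
    while value is None:   # shard exhausted: refill from the master, retry
        _refill(shard_id, _reserve_batch())
        value = _take(shard_id)
    return value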
Edit
I needed this for my app (that was why I was searching the question :P ) so I have implemented my solution. It can grab single IDs as well as efficiently grab batches. I have tested it in a controlled environment (on appengine) and it performed very well. You can find the code on github.
Take a look at how the sharded counters are made; it may help you. Also, do you really need the identifiers to be numeric? If uniqueness is enough, just use the entity keys.
Alternatively, you could use allocate_ids(), as people have suggested, and then create these entities up front (i.e. with placeholder property values).
first, last = MyModel.allocate_ids(1000000)
keys = [Key(MyModel, id) for id in range(first, last+1)]
Then, when creating a new invoice, your code could run through these entities to find the one with the lowest ID whose placeholder properties have not yet been overwritten with real data.
I haven't put this into practice, but it seems like it should work in theory, most likely with the same limitations people have already mentioned.
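A sketch of that lookup in ndb, with a boolean placeholder property I'm calling claimed (my invention, not part of allocate_ids itself):

from google.appengine.ext import ndb

class Invoice(ndb.Model):
    claimed = ndb.BooleanProperty(default=False)  # assumed placeholder flag

def claim_lowest_free_id():
    # Lowest-keyed entity whose placeholder has not been overwritten yet.
    # NOTE: not transactional; a transaction here would require an ancestor
    # query, so two concurrent callers could race and claim the same entity.
    entity = (Invoice.query(Invoice.claimed == False)
              .order(Invoice._key)
              .get())
    entity.claimed = True
    entity.put()
    return entity.key.id()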
Remember: sharding increases the probability that you will get a unique, auto-incremented value, but does not guarantee it. Please take Nick's advice if you MUST have a unique auto-increment.
I implemented something very simplistic for my blog, which increments an IntegerProperty called iden, rather than using the Key ID.
I define max_iden() to find the maximum iden integer currently in use. This function scans through all existing blog posts.
def max_iden():
    max_entity = Post.gql("order by iden desc").get()
    if max_entity:
        return max_entity.iden
    return 1000  # If this is the very first entry, start at number 1000
Then, when creating a new blog post, I assign it an iden property of max_iden() + 1
new_iden = max_iden() + 1
p = Post(parent=blog_key(), header=header, body=body, iden=new_iden)
p.put()
I wonder if you might also want to add some sort of verification function after this, i.e. to ensure that max_iden() has actually incremented before moving on to the next invoice.
Altogether: fragile, inefficient code.
I'm thinking of using the following solution: use CloudSQL (MySQL) to insert the records and assign the sequential ID (maybe with a Task Queue); later (using a cron task), move the records from CloudSQL back to the Datastore.
The entities can also have a UUID, so we can map the entities between the Datastore and CloudSQL, and still have the sequential ID (for legal reasons).


How to Sort Arrays in Dictionary?

I'm currently writing a program in Python to track statistics on video games. An example of the dictionary I'm using to track the scores:
ten = 1
sec = 9
fir = 10
thi5 = 6
sec5 = 8

games = {
    'adom': [ten + fir + sec + sec5, "Ancient Domain of Mysteries"],
    'nethack': [fir + fir + fir + sec + thi5, "Nethack"]
}
Right now, I'm going about this the hard way, making a big long list of nested ifs, but I don't think that's the proper way to go about it. I was trying to figure out a way to sort the dictionary by those arrays and then display the first ten that come up, instead of having to work deep in the if statements.
So... basically, my question is: do you have any ideas that would make this easier, instead of way, way harder?
===== EDIT ====
ten+fir produces a number. I want to find a way to sort the lists (I lack the knowledge of proper terminology) by that number (basically, whichever ones have the highest number in the first part of the array go first).
Here's an example of my current way of going about it (though it's incomplete, due to it being very tiresome): Example Nests (paste2)
==== SECOND EDIT ====
In case someone doesn't see my comment below:
ten, fir, et cetera are just variables for scores. Basically, it goes from a top-ten list into a variable number.
ten = 1, nin = 2, fir = 10, fir5 = 10, sec5 = 8, sec = 9...
so 'adom': [ten+fir+sec+sec5, "Ancient Domain of Mysteries"] actually registers as 'adom': [1+10+9+8, "Ancient Domain of Mysteries"], which ends up looking like:
'adom': [28, "Ancient Domain of Mysteries"]
So, basically, if I ended up doing the "top two" out of my example, it'd be :
((1)) Nethack (48)
((2)) ADOM (28)
I'd write an actual number, but I'm thinking of changing a few things up, so the numbers might be a touch different, and I wouldn't want to rewrite it.
== THIRD (AND HOPEFULLY THE FINAL) EDIT ==
Fixed my original code example.
How about something like this:
scores = list(games.items())
scores.sort(key=lambda kv: kv[1][0], reverse=True)  # kv is a (name, [score, title]) pair
return scores[:10]
This will return the first 10 items, sorted by the first item in the array, highest first.
I'm not sure if this is what you want though, please update the question (and fix the example link) if you need something else...
import heapq
return heapq.nlargest(10, games.iteritems(), key=lambda kv: kv[1][0])
is the most direct way to get the top ten key / value pairs, sorted by the first item of each "value" list. If you can define more precisely what output you want (just the names, the name / value pairs, or what else?) and the sorting criterion, this is easy to adjust, of course.
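For instance, with the sample data above, and using items() so the snippet also runs on Python 3:

import heapq

games = {
    'adom': [28, "Ancient Domain of Mysteries"],
    'nethack': [48, "Nethack"],
}

top = heapq.nlargest(10, games.items(), key=lambda kv: kv[1][0])
for rank, (name, (score, title)) in enumerate(top, 1):
    print("((%d)) %s (%d)" % (rank, title, score))
# ((1)) Nethack (48)
# ((2)) Ancient Domain of Mysteries (28)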
Wim's solution is good, but I'd say you should probably go the extra mile and push this work off onto a database rather than relying on Python. Python interfaces well with most types of databases, and much of what you're exploring is already a solved problem there.
For example, instead of worrying about shifting your dictionaries into various other data types in order to sort them properly, you can simply get all the data for each pertinent entry pre-sorted according to the criteria of your query. There goes the need for convoluted sorting and re-sorting right there.
While dictionaries are tempting to use because they give the illusion of database-like abilities to access data based on its attributes, I still think they stumble quite a bit in practice. I don't have any numbers to throw at you, but from personal experience, almost anything you do in Python to manipulate large amounts of data can be done faster and more efficiently, both in code and computation, with something like MySQL.
I'm not sure what you have planned as far as the structure of your data goes, but along with adding data, changing its structure is a lot easier using a database, too.
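To illustrate the idea without a MySQL server, here is a small sketch using the standard library's sqlite3; the point is that the database hands back rows already sorted:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE games (name TEXT, score INTEGER, title TEXT)")
conn.executemany("INSERT INTO games VALUES (?, ?, ?)", [
    ("adom", 28, "Ancient Domain of Mysteries"),
    ("nethack", 48, "Nethack"),
])

# Pre-sorted top ten straight from the query; no Python-side sorting needed.
top_ten = conn.execute(
    "SELECT title, score FROM games ORDER BY score DESC LIMIT 10"
).fetchall()
print(top_ten)  # [('Nethack', 48), ('Ancient Domain of Mysteries', 28)]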
