Error creating a Python table using the Agate library

I am using the Agate library to create a table with the following command:
table = agate.Table(cpi_rows, cpi_types, cpi_titles)
Sample values are below:
cpi_rows[0]
[1.0,'Denmark','DNK',128.0,'EU',1.0,91.0,7.0,2.2,87.0,95.0,83.0,98.0,0.0,97.0,0.0,96.0,98.0,0.0,87.0,89.0,88.0,83.0,0.0,0.0,0.0]
cpi_titles
['Country Rank','Country / Territory','WB Code','IFS Code','Region','Country Rank','CPI 2013 Score', 'Surveys Used','Standard Error', '90% Confidence interval Lower', 'Upper','Scores range MIN','MAX','Data sources AFDB','BF (SGI)','BF (BTI)','IMD','ICRG','WB','WEF','WJP','EIU','GI','PERC','TI','FH']
When I run the command, I get the following error:
ValueError: Column names must be strings or None.
All the names in cpi_titles are strings, yet I cannot work out the cause of the error.

I just tried your code, and apart from a few corrections to names it worked without a problem. Note that agate.Table takes its arguments in the order (rows, column_names, column_types), so passing cpi_types as the second positional argument makes Agate treat your type objects as column names, which is what triggers the error.
cpi_rows = [[]]
cpi_rows[0] =[1.0,'Denmark','DNK',128.0,'EU',1.0,91.0,7.0,2.2,87.0,95.0,83.0,98.0,0.0,97.0,0.0,96.0,98.0,0.0,87.0,89.0,88.0,83.0,0.0,0.0,0.0]
cpi_titles = ['Country Rank','Country / Territory','WB Code','IFS Code','Region','Country Rank','CPI 2013 Score', 'Surveys Used','Standard Error', '90% Confidence interval Lower', 'Upper','Scores range MIN','MAX','Data sources AFDB','BF (SGI)','BF (BTI)','IMD','ICRG','WB','WEF','WJP','EIU','GI','PERC','TI','FH']
table = agate.Table(cpi_rows, cpi_titles)
table.print_structure()
The output is
| column | data_type |
| ----------------------------- | --------- |
| Country Rank | Number |
| Country / Territory | Text |
| WB Code | Text |
| IFS Code | Number |
| Region | Text |
| Country Rank_2 | Number |
| CPI 2013 Score | Number |
| Surveys Used | Number |
| Standard Error | Number |
| 90% Confidence interval Lower | Number |
| Upper | Number |
| Scores range MIN | Number |
| MAX | Number |
| Data sources AFDB | Number |
| BF (SGI) | Number |
| BF (BTI) | Number |
| IMD | Number |
| ICRG | Number |
| WB | Number |
| WEF | Number |
| WJP | Number |
| EIU | Number |
| GI | Number |
| PERC | Number |
| TI | Number |
| FH | Number |
Obviously, I don't have your definition of the types you want to apply to this data. The only other thing to note is that you have defined Country Rank twice in your column titles, so Agate warns you about this and relabels the second one (Country Rank_2).
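If you do want to pass explicit column types as in your original call, a minimal sketch would be the following (the cpi_types list here is only a guess at plausible types for the 26 columns; adjust it to your data):
import agate

# Guessed types matching the 26 columns listed above; adjust to your data.
cpi_types = ([agate.Number(), agate.Text(), agate.Text(), agate.Number(), agate.Text()]
             + [agate.Number()] * 21)

# Keyword arguments avoid mixing up the positional order
# (rows, column_names, column_types).
table = agate.Table(cpi_rows, column_names=cpi_titles, column_types=cpi_types)
table.print_structure()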

Related

Pandas: How do I read specific columns in a file and make them into a new csv

Here is sample 1:
| district_id | date |
| -------- | ----------- |
| 18 | 1995-03-24 |
| 1 | 1993-02-26 |
Sample 2:
| link_id | type |
| -------- | ----------- |
| 9 | gold |
| 19 | classic |
I want to gather sample 1's date column and sample 2's type column and output them as data.csv
You can concatenate the two columns side by side (column-wise, axis=1) and then write the result out:
df3 = pandas.concat([df1['date'], df2['type']], axis = 1)
df3.to_csv('data.csv')
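For completeness, a minimal self-contained sketch (the file names sample1.csv and sample2.csv are placeholders for wherever your two samples live):
import pandas as pd

# Placeholder file names; adjust to your actual inputs.
df1 = pd.read_csv('sample1.csv')  # contains the 'date' column
df2 = pd.read_csv('sample2.csv')  # contains the 'type' column

# Put the two columns side by side and write them to data.csv.
df3 = pd.concat([df1['date'], df2['type']], axis=1)
df3.to_csv('data.csv', index=False)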

Data frame with Hidden column values

The data frame has a column 'value' which contains some hidden characters.
When I write the data frame to PostgreSQL I get the error below:
ValueError: A string literal cannot contain NUL (0x00) characters.
I somehow found the cause of the error. Refer to the table below (the column value appears to be missing):
| | datetime | mc | tagname | value | quality |
|-------|--------------------------|----|---------|------------|---------|
| 19229 | 16-12-2021 02:31:29.083 | L | VIN | | 192 |
| 19230 | 16-12-2021 02:35:28.257 | L | VIN | C4A 173026 | 192 |
I checked the length of the string; it is the same 10 characters as the rows below:
df.value.str.len()
Requirement:
I want to replace that empty area with the text 'miss'. I tried different methods in pandas but have not been able to make it work:
df['value'] = df['value'].str.replace(r"[\"\',]", '')
df.replace('\'','', regex=True, inplace=True)
The expected output is:
| | datetime | mc | tagname | value | quality |
|-------|--------------------------|----|---------|------------|---------|
| 19229 | 16-12-2021 02:31:29.083 | L | VIN | miss | 192 |
| 19230 | 16-12-2021 02:35:28.257 | L | VIN | C4A 173026 | 192 |
Try this: strip the control characters, then mark the now-empty cells as 'miss':
df['value'] = df['value'].str.replace(r'[\x00-\x19]', '', regex=True).replace('', 'miss')
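A small self-contained check of that approach (the sample frame below is made up to mimic the NUL-padded cell):
import pandas as pd

# A made-up 'value' column where one cell is padded with NUL characters.
df = pd.DataFrame({'value': ['\x00' * 10, 'C4A 173026']})

# Strip control characters, then mark the now-empty cells as 'miss'.
df['value'] = (
    df['value']
    .str.replace(r'[\x00-\x19]', '', regex=True)
    .replace('', 'miss')
)
print(df)  # the first row becomes 'miss'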

Unstack (pivot?) dataframe in Pandas

I have a dataframe somewhat like this:
ID | Relationship | First Name | Last Name | DOB | Address | Phone
0 | 2 | Self | Vegeta | Saiyan | 01/01/1949 | Saiyan Planet | 123-456-7891
1 | 2 | Spouse | Bulma | Saiyan | 04/20/1969 | Saiyan Planet | 123-456-7891
2 | 3 | Self | Krilin | Human | 08/21/1992 | Planet Earth | 789-456-4321
3 | 4 | Self | Goku | Kakarot | 05/04/1975 | Planet Earth | 321-654-9870
4 | 4 | Child | Gohan | Kakarot | 04/02/2001 | Planet Earth | 321-654-9870
5 | 5 | Self | Freezer | Fridge | 09/15/1955 | Deep Space | 456-788-9568
I'm looking to have the rows with same ID appended to the right of the first row with that ID.
Example:
ID | Relationship | First Name | Last Name | DOB | Address | Phone | Spouse_First Name | Spouse_Last Name | Spouse_DOB | Child_First Name | Child_Last Name | Child_DOB |
0 | 2 | Self | Vegeta | Saiyan | 01/01/1949 | Saiyan Planet | 123-456-7891 | Bulma | Saiyan | 04/20/1969 | | |
1 | 3 | Self | Krilin | Human | 08/21/1992 | Planet Earth | 789-456-4321 | | | | | |
2 | 4 | Self | Goku | Kakarot | 05/04/1975 | Planet Earth | 321-654-9870 | | | | Gohan | Kakarot | 04/02/2001 |
3 | 5 | Self | Freezer | Fridge | 09/15/1955 | Deep Space | 456-788-9568 | | | | | |
My real dataframe has more columns, but they all contain the same information when two rows share the same ID, so there is no need to duplicate them. I only need to append the columns I choose, which in this case are First Name, Last Name and DOB, with the prefix of the new column label depending on the value in the 'Relationship' column (I can rename them later if that is not possible directly; I just wanted to illustrate my point).
I have tried different approaches, and it seems like unstack or pivot is the way to go, but I have not been successful in making it work.
Any help would be greatly appreciated.
This solution assumes that the DataFrame is indexed by the ID column.
import pandas as pd

not_self = (
    df.query("Relationship != 'Self'")
    .pivot(columns='Relationship')
    .swaplevel(axis=1)
    .reindex(
        pd.MultiIndex.from_product(
            (
                set(df['Relationship'].unique()) - {'Self'},
                df.columns.to_series().drop('Relationship')
            )
        ),
        axis=1
    )
)
not_self.columns = [' '.join((a, b)) for a, b in not_self.columns]
result = df.query("Relationship == 'Self'").join(not_self)
Please let me know if this is not what was wanted.
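If ID is still a regular column in your frame (as in the sample above), a one-line setup step before running the snippet would be:
# Make ID the index so the join above lines up 'Self' rows with their relatives.
df = df.set_index('ID')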

vlookup on text field using pandas

I need to use vlookup functionality in pandas.
DataFrame 2: (FEED_NAME has no duplicate rows)
+-----------+--------------------+---------------------+
| FEED_NAME | Name | Source |
+-----------+--------------------+---------------------+
| DMSN | DMSN_YYYYMMDD.txt | Main hub |
| PCSUS | PCSUS_YYYYMMDD.txt | Basement |
| DAMJ | DAMJ_YYYYMMDD.txt | Effiel Tower router |
+-----------+--------------------+---------------------+
DataFrame 1:
+-------------+
| SYSTEM_NAME |
+-------------+
| DMSN |
| PCSUS |
| DAMJ |
| : |
| : |
+-------------+
DataFrame 1 contains many more rows; it is actually a column in a much larger table. I need to merge df1 with df2 to make it look like:
+-------------+--------------------+---------------------+
| SYSTEM_NAME | Name | Source |
+-------------+--------------------+---------------------+
| DMSN | DMSN_YYYYMMDD.txt | Main Hub |
| PCSUS | PCSUS_YYYYMMDD.txt | Basement |
| DAMJ | DAMJ_YYYYMMDD.txt | Eiffel Tower router |
| : | | |
| : | | |
+-------------+--------------------+---------------------+
In Excel I would just have done VLOOKUP(,,1,TRUE) and then copied it across all cells.
I have tried various combinations of merge and join, but I keep getting KeyError: 'SYSTEM_NAME'.
Code:
_df1 = df1[["SYSTEM_NAME"]]
_df2 = df2[['FEED_NAME','Name','Source']]
_df2.rename(columns = {'FEED_NAME':"SYSTEM_NAME"})
_df3 = pd.merge(_df1,_df2,how='left',on='SYSTEM_NAME')
_df3.head()
You missed the inplace=True argument in the line _df2.rename(columns={'FEED_NAME': 'SYSTEM_NAME'}), so the _df2 column names haven't changed. Try this instead:
_df1 = df1[["SYSTEM_NAME"]]
_df2 = df2[['FEED_NAME','Name','Source']]
_df2.rename(columns = {'FEED_NAME':"SYSTEM_NAME"}, inplace=True)
_df3 = pd.merge(_df1,_df2,how='left',on='SYSTEM_NAME')
_df3.head()
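As an aside, a chained rename avoids mutating a sliced frame in place, which pandas may flag with a SettingWithCopyWarning; a sketch of the same merge:
import pandas as pd

_df2 = df2[['FEED_NAME', 'Name', 'Source']].rename(columns={'FEED_NAME': 'SYSTEM_NAME'})
_df3 = pd.merge(df1[['SYSTEM_NAME']], _df2, how='left', on='SYSTEM_NAME')
_df3.head()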

Maximizing a combination of a series of values

This is a complicated one, but I suspect there's some principle I can apply to make it simple - I just don't know what it is.
I need to parcel out presentation slots to a class full of students for the semester. There are multiple possible dates, and multiple presentation types. I conducted a survey where students could rank their interest in the different topics. What I'd like to do is get the best (or at least a good) distribution of presentation slots to students.
So, what I have:
List of 12 dates
List of 18 students
CSV file where each student (row) has a rating 1-5 for each date
What I'd like to get:
Each student should have one of presentation type A (intro), one of presentation type B (figures) and 3 of presentation type C (aims)
Each date should have at least 1 of each type of presentation
Each date should have no more than 2 of type A or type B
Try to give students presentations that they rated highly (4 or 5)
I should note that I realize this looks like a homework problem, but it's real life :-). I was thinking that I might make a Student class for each student that contains the dates for each presentation type, but I wasn't sure what the best way to populate it would be. Actually, I'm not even sure where to start.
TL;DR: I think you're giving your students too much choice :D
But I had a shot at this problem anyway. Pretty fun exercise actually, although some of the constraints were a little vague. Most of all, I had to guess what the actual students' preference distribution would look like. I went with uniformly distributed, independent variables, although that's probably not very realistic. Still I think it should work just as well on real data as it does on my randomly generated data.
I considered brute forcing it, but a rough analysis gave me an estimate of over 10^65 possible configurations. That's kind of a lot. And since we don't have a trillion trillion years to consider all of them, we'll need a heuristic approach.
Because of the size of the problem, I tried to avoid doing any backtracking. But this meant that you could get stuck; there might not be a solution where everyone only gets dates they gave 4's and 5's.
I ended up implementing a double-edged Iterative Deepening-like search, where both the best case we're still holding out hope for (i.e., assign students to a date they gave a 5) and the worst case scenario we're willing to accept (some student might have to live with a 3) are gradually lowered until a solution is found. If we get stuck, reset, lower expectations, and try again. Tasks A and B are assigned first, and C is done only after A and B are complete, because the constraints on C are far less stringent.
I also used a weighting factor to model the trade-off between maximizing student happiness and satisfying the types-of-presentations-per-day limits.
Currently it seems to find a solution for pretty much every random generated set of preferences. I included an evaluation metric; the ratio between the sum of the preference values of all assigned student/date combos, and the sum of all student ideal/top 3 preference values. For example, if student X had two fives, one four and the rest threes on his list, and is assigned to one of his fives and two threes, he gets 5+3+3=11 but could ideally have gotten 5+5+4=14; he is 11/14 = 78.6% satisfied.
After some testing, it seems that my implementation tends to produce an average student satisfaction of around 95%, a lot better than I expected :) But again, that is with fake data. Real preferences are probably more clumped and harder to satisfy.
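As a rough sketch of that satisfaction metric (not taken from the actual script; prefs and assigned are hypothetical names):
def satisfaction(prefs, assigned):
    # prefs:    a student's ratings (1-5), one per date
    # assigned: indices of the dates the student was given
    got = sum(prefs[d] for d in assigned)
    best = sum(sorted(prefs, reverse=True)[:len(assigned)])
    return got / float(best)

# Example from the text: two 5s, one 4, the rest 3s;
# assigned one 5 and two 3s -> 11/14, about 78.6% satisfied.
prefs = [5, 5, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3]
print(satisfaction(prefs, [0, 3, 4]))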
Below is the core of the algorithm. The full script is ~250 lines and a bit too long to post here, I think. Check it out on GitHub.
...

# Assign a date for a given task to each student,
# preferring a date that they like and is still free.
def fill(task, lowest_acceptable, spread_weight=0.1, tasks_to_spread="ABC"):
    random_order = range(nStudents)  # randomize student order, so everyone
    random.shuffle(random_order)     # has an equal chance to get their first pick
    for i in random_order:
        student = students[i]
        if student.dates[task]:  # student is already assigned for this task?
            continue
        # get available dates ordered by preference and how fully booked they are
        preferred = get_favorite_day(student, lowest_acceptable,
                                     spread_weight, tasks_to_spread)
        for date_nr in preferred:
            date = dates[date_nr]
            if date.is_available(task, student.count, lowest_acceptable == 1):
                date.set_student(task, student.count)
                student.dates[task] = date
                break

# attempt to "fill()" the schedule while gradually lowering expectations
start_at = 5
while start_at > 1:
    lowest_acceptable = start_at
    while lowest_acceptable > 0:
        fill("A", lowest_acceptable, spread_weight, "AAB")
        fill("B", lowest_acceptable, spread_weight, "ABB")
        if lowest_acceptable == 1:
            fill("C", lowest_acceptable, spread_weight_C, "C")
        lowest_acceptable -= 1
And here is an example result as printed by the script:
Date
================================================================================
Student | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
================================================================================
1 | | A | B | | C | | | | | | | |
2 | | | | | A | | | | | B | C | |
3 | | | | | B | | | C | | A | | |
4 | | | | A | | C | | | | | | B |
5 | | | C | | | | A | B | | | | |
6 | | C | | | | | | | A | B | | |
7 | | | C | | | | | B | | | | A |
8 | | | A | | C | | B | | | | | |
9 | C | | | | | | | | A | | | B |
10 | A | B | | | | | | | C | | | |
11 | B | | | A | | C | | | | | | |
12 | | | | | | A | C | | | | B | |
13 | A | | | B | | | | | | | | C |
14 | | | | | B | | | | C | | A | |
15 | | | A | C | | B | | | | | | |
16 | | | | | | A | | | | C | B | |
17 | | A | | C | | | B | | | | | |
18 | | | | | | | C | A | B | | | |
================================================================================
Total student satisfaction: 250/261 = 95.00%
