Memory management for dictionary in Python

I have the following code and I don't understand what's happening behind the scenes. Can anyone explain?
import sys
data = {}
print sys.getsizeof(data)
###### output is 280
data = {1:2, 2:1, 3:2, 4:5, 5:5, 6:6, 7:7, 8:8, 9:9, 0:0, 11:11, 12:12, 13:13, 14:14, 15:15}
print sys.getsizeof(data)
###### output is 1816
data = {1:2, 2:1, 3:2, 4:5, 5:5, 6:6, 7:7, 8:8, 9:9, 0:0, 11:11, 12:12, 13:13, 14:14, 15:15, 16:16}
print sys.getsizeof(data)
###### output is 1048
If we increase the length of the dictionary, its size in memory should increase, but here it decreases. Why?

Per the sys.getsizeof documentation, getsizeof() calls the object's __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.
On Windows x64, if I did the following:
data={ 1:2,2:1,3:2,4:5,5:5,6:6,7:7,8:8,9:9,0:0,11:11,12:12,13:13,14:14,15:15}
print sys.getsizeof(data)
print data
data[16]=16
print sys.getsizeof(data)
print data
printed:
1808
{0: 0, 1: 2, 2: 1, 3: 2, 4: 5, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 11: 11, 12: 12, 13: 13, 14: 14, 15: 15}
1808
{0: 0, 1: 2, 2: 1, 3: 2, 4: 5, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 11: 11, 12: 12, 13: 13, 14: 14, 15: 15, 16: 16}
But I did indeed observe the same behavior you describe when rebuilding the data dictionary as a single literal:
272 #empty data dict
1808 # 15 elements in data dict
1040 # 16 elements in data dict
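Why the size can go down at all: a plausible mechanism, at least on CPython 2.7 (dict internals vary between versions, so treat this as a hypothesis rather than a guarantee), is that a dict literal presizes its hash table from the number of key/value pairs in the display and then inserts the pairs one by one. Fifteen pairs get a presized table small enough that a resize is triggered partway through the build, and a resize roughly quadruples the table; sixteen pairs get a table presized large enough to be filled without any resize, so the final dict ends up with fewer slots. The final table size therefore depends on the resize history, not just on the item count. A minimal sketch to watch the reported size jump as keys are inserted one at a time:
import sys

d = {}
last = sys.getsizeof(d)
print "0 items -> %d bytes" % last
for i in range(20):
    d[i] = i
    size = sys.getsizeof(d)
    if size != last:  # the hash table was resized on this insert
        print "%d items -> %d bytes" % (len(d), size)
        last = size
Inserted one at a time the size only ever grows; the shrink in the question shows up only when a whole literal is built at once with a different presize.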

Related

Pandas Dataframe: search for specific columns, delete rows within the columns that equal a certain value

I'm trying to write a piece of code and I keep getting stuck with an issue of trying to search for a list of columns in a dataframe, which is not allowed because the list is unhashable.
Essentially, I have a sequence: 'KGTLPK'
I want to first locate every instance of 'K' in the sequence: [0,5]
Then I want to search columns 0 and 5 of my DataFrame for a specific value: 80
I want to get a list of rows that have '80' in columns 0 & 5, and delete those rows.
query = 'KGTLPK'
AA = 'K'
x = []
for pos, char in enumerate(query):
    if char == AA:
        x.append(pos)
print(x)
# pddf = my DataFrame
df2 = pddf.filter(regex=x)  # fails here: filter's regex parameter expects a string, not a list
print(df2)
rows_removal = list(pddf.loc[pddf[df2] == Phosphorylation].index.tolist())  # Phosphorylation is undefined, and pddf[df2] is not valid indexing
print(rows_removal)
pddf.drop(pddf.index[rows_removal])  # drop() returns a new DataFrame; the result is never assigned
My full DataFrame has 8855 rows, and this needs to decrease as improper values are identified. In this example I deleted all rows where column 0 was not equal to 16. I just need an easier way to do this so I don't hardcode everything.
pddf.head(10).to_dict() output, dataFrame unedited:
{0: {0: 16, 1: 16, 2: 16, 3: 16, 4: 16, 5: 16, 6: 16, 7: 16, 8: 16, 9: 16}, 1: {0: 16, 1: 16, 2: 16, 3: 16, 4: 16, 5: 16, 6: 16, 7: 16, 8: 16, 9: 16}, 2: {0: 16, 1: 16, 2: 16, 3: 16, 4: 16, 5: 16, 6: 16, 7: 16, 8: 16, 9: 16}, 3: {0: 16, 1: 44, 2: 80, 3: 42, 4: 71, 5: 28, 6: 14, 7: 28, 8: 42, 9: 81}}
This extends for 8000 rows.
This is as far as I've gotten. My goal for example would be to say something like "for every instance of '42' in column 3, delete that row"
Any help I can get is greatly appreciated!
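For what it's worth, a minimal sketch of one way to express this (assuming, as in the question, that pddf already exists with integer column labels and that 80 is the value marking rows to drop; the variable names are just illustrative):
query = 'KGTLPK'
AA = 'K'
target = 80  # value marking rows that should be dropped

# Column labels to inspect: every position of 'K' in the sequence.
cols = [pos for pos, char in enumerate(query) if char == AA]

# Boolean mask of rows where any of those columns equals the target
# (use .all(axis=1) instead if both columns must match).
mask = (pddf[cols] == target).any(axis=1)

# Keep the non-matching rows; note the reassignment, since neither
# boolean indexing nor drop() modifies pddf in place by default.
pddf = pddf[~mask]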

Python - speed up iteration between nested dictionary

I want to ask for some advice on speeding up my code. I know you can see many mistakes, but I need your knowledge and help to find where the problem lies and how I can improve this code.
Background - what the application does:
1. Use OpenPyXL.
2. Open the excel file, read the data and put it into a nested dictionary:
2a. 1st level for rows
2b. 2nd level for items
Example:
{1: '#5C\Qopen#', 2: '20386239', 3: '3000133215', 4: 'RA', 5: None, 6: 'Vendor2', 7: 'IM45', 8: '#FR\QNot due#', 9: None, 10: None, 11: 'E1', 12: 'DNS', 13: datetime.datetime(2019, 12, 27, 0, 0), 14: datetime.datetime(2019, 12, 26, 0, 0), 15: -21501, 16: 'GBP', 17: -21501, 18: 'GBP', 19: datetime.datetime(2019, 12, 26, 0, 0), 20: datetime.datetime(2020, 2, 9, 0, 0)}
{1: '#5C\Qopen#', 2: '20386239', 3: '3000133215', 4: 'RA', 5: None, 6: 'Vendor1', 7: 'IM45', 8: '#FR\QNot due#', 9: None, 10: None, 11: 'E1', 12: 'DNS', 13: datetime.datetime(2019, 12, 27, 0, 0), 14: datetime.datetime(2019, 12, 26, 0, 0), 15: -21501, 16: 'GBP', 17: -21501, 18: 'GBP', 19: datetime.datetime(2019, 12, 26, 0, 0), 20: datetime.datetime(2020, 2, 9, 0, 0)}
{1: '#5C\Qopen#', 2: '20386239', 3: '3000133215', 4: 'RA', 5: None, 6: 'Vendor1', 7: 'IM45', 8: '#FR\QNot due#', 9: None, 10: None, 11: 'E1', 12: 'DNS', 13: datetime.datetime(2019, 12, 27, 0, 0), 14: datetime.datetime(2019, 12, 26, 0, 0), 15: -21501, 16: 'EUR', 17: -21501, 18: 'GBP', 19: datetime.datetime(2019, 12, 26, 0, 0), 20: datetime.datetime(2020, 2, 9, 0, 0)}
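For reference, the loading step from points 1-2 isn't shown in the question; it might look roughly like this (a sketch only; the file name and the use of iter_rows are assumptions on my part):
from openpyxl import load_workbook

wb = load_workbook('report.xlsx', read_only=True)  # hypothetical file name
ws = wb.active

assets2 = {}
for row_no, row in enumerate(ws.iter_rows(values_only=True), start=1):
    # 1st level: row number; 2nd level: column number -> cell value
    assets2[row_no] = {col_no: value for col_no, value in enumerate(row, start=1)}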
The script should look through the whole data set, comparing Vendor & Currency, and check whether a particular vendor has more than one currency (e.g. when Vendor1 is not 100% one specific currency such as GBP).
When that happens, it should put a text like "something something" in column 17, in the same row where the differing currency is.
My code mostly works properly, but it is very slow when I have to process a file with 30,000 rows.
Do you know how I can improve it?
Thank you
next_row2 = 1
numerkolumny = 1
nastepny = 1
numer_vendora = 1
ilosc_gbp = 0
ilosc_inne = 0
linijkadanych = {}
lista_vendorow = {}
for zmienna2 in progressbar.progressbar(assets2, redirect_stdout=True):
    for iteracja in assets2:  # (iteracja itself is never used)
        if assets2[zmienna2][6] not in lista_vendorow.values():
            if nastepny < len(assets2):
                if assets2[zmienna2][6] == assets2[nastepny+1][6]:
                    if assets2[nastepny+1][16] == "GBP":  # IF GBP WAS FOUND, COUNT IT AS GBP
                        ilosc_gbp = ilosc_gbp + 1
                        nastepny = nastepny + 1
                    else:  # IF ANOTHER CURRENCY WAS FOUND, COUNT IT AS OTHER
                        ilosc_inne = ilosc_inne + 1
                        nastepny = nastepny + 1
                else:
                    nastepny = nastepny + 1
            if nastepny >= len(assets2):  # IF ALL ROWS HAVE BEEN ITERATED OVER, COMPUTE THE RESULT
                suma_walut = ilosc_gbp + ilosc_inne  # SUM ALL THE CURRENCIES
                # IF DEVIATIONS ARE FOUND - REPORT!
                if (suma_walut != ilosc_gbp) and (suma_walut != ilosc_inne):
                    for waluty in assets2:  # row number
                        for waluty2 in assets2[waluty]:  # column number
                            if assets2[waluty][6] == assets2[zmienna2][6]:
                                if ilosc_gbp > ilosc_inne:
                                    result_tab.cell(column=17, row=waluty+1, value="Currency other than GBP. Check!").font = style_blad_bold
                                else:
                                    result_tab.cell(column=17, row=waluty+1, value="Other currencies between GBP!. Check!").font = style_blad_bold
                lista_vendorow[numer_vendora] = assets2[zmienna2][6]
                ilosc_gbp = 0  # RESET THE VARIABLES, WE ARE COUNTING A NEW VENDOR
                ilosc_inne = 0  # RESET THE VARIABLES, WE ARE COUNTING A NEW VENDOR
                nastepny = 1  # RESET THE VARIABLES, WE ARE COUNTING A NEW VENDOR
                numer_vendora = numer_vendora + 1
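One way to make this much faster (a sketch, not a drop-in replacement: it reuses assets2, result_tab and style_blad_bold from the code above and assumes the vendor sits in column 6 and the currency in column 16) is to collect the currencies per vendor in a single pass, then flag the affected rows in a second pass. That is linear in the number of rows instead of re-scanning the whole dictionary for every row:
from collections import defaultdict

# Pass 1: collect the set of currencies seen for each vendor.
currencies_per_vendor = defaultdict(set)
for row in assets2:
    currencies_per_vendor[assets2[row][6]].add(assets2[row][16])

# Vendors that use more than one currency.
mixed_vendors = {vendor for vendor, curr in currencies_per_vendor.items() if len(curr) > 1}

# Pass 2: flag every row belonging to a mixed-currency vendor.
for row in assets2:
    if assets2[row][6] in mixed_vendors:
        cell = result_tab.cell(column=17, row=row + 1,
                               value="Mixed currencies for this vendor. Check!")
        cell.font = style_blad_bold
This collapses the two warning texts into one; if you need to distinguish "mostly GBP" from "mostly other", count occurrences per currency (e.g. with collections.Counter) instead of using a set.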

Converting Python dictionary to Pandas Data Frame

I have a Python dictionary, dict1, whose keys are integers and values are strings. I want to create a pandas DataFrame from this dictionary. Can you please suggest how to do that in Python 3.x?
I used following code
df_i=pd.DataFrame(dict1,columns=['num_int','num_str'])
But got error message
ValueError: If using all scalar values, you must pass an index
I also tried the steps mentioned here, but I got output like
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,...
for all the rows
My sample dictionary looks like
1: '031155233',
2: '031420399',
3: '031442593',
4: '032952341',
5: '033141410',
6: '033404365',
7: '033423523',
8: '033461055',
9: '012025663',
10: '012322156',
11: '021422395',
12: '036145459',
13: '035910162',
14: '042144641',
15: '042525232',
16: '040535923',
17: '042523604',
18: '029090230',
19: '012402315',
I think you should just add orient = 'index' when calling pd.DataFrame.from_dict()
my_dict = {1: '031155233',
2: '031420399',
3: '031442593'}
>>> pd.DataFrame.from_dict(my_dict, orient='index')
0
1 031155233
2 031420399
3 031442593
And if you need to pass in the column names you mention, here's a way:
df = pd.DataFrame.from_dict(my_dict, orient='index').reset_index()
df.columns = ['num_int','num_str']
>>> print(df)
num_int num_str
0 1 031155233
1 2 031420399
2 3 031442593
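Alternatively (a small sketch making the same assumption about dict1's shape), passing the items directly sets both column names in one step:
import pandas as pd

dict1 = {1: '031155233', 2: '031420399', 3: '031442593'}
df = pd.DataFrame(list(dict1.items()), columns=['num_int', 'num_str'])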

Taking the mean by one column and then by another in pandas

I have the following dataset:
data = {'VALVE_SCORE': {0: 34.1,1: 41.0,2: 49.7,3: 53.8,4: 35.8,5: 49.2,6: 38.6,7: 51.2,8: 44.8,9: 51.5,10: 41.9,11: 46.0,12: 41.9,13: 51.4,14: 35.0,15: 49.7,16: 41.5,17: 51.5,18: 45.2,19: 53.4,20: 38.1,21: 50.2,22: 25.4,23: 30.0,24: 28.1,25: 49.9,26: 27.5,27: 37.2,28: 27.7,29: 45.7,30: 27.2,31: 30.0,32: 27.9,33: 34.3,34: 29.5,35: 34.5,36: 28.0,37: 33.6,38: 26.8,39: 31.8},
'DAY': {0: 6, 1: 6, 2: 6, 3: 6, 4: 13, 5: 13, 6: 13, 7: 13, 8: 20, 9: 20, 10: 20, 11: 20, 12: 27, 13: 27, 14: 27, 15: 27, 16: 3, 17: 3, 18: 3, 19: 3, 20: 10, 21: 10, 22: 10, 23: 10, 24: 17, 25: 17, 26: 17, 27: 17, 28: 24, 29: 24, 30: 24, 31: 24, 32: 3, 33: 3, 34: 3, 35: 3, 36: 10, 37: 10, 38: 10, 39: 10},
'MONTH': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1, 10: 1, 11: 1, 12: 1, 13: 1, 14: 1, 15: 1, 16: 2, 17: 2, 18: 2, 19: 2, 20: 2, 21: 2, 22: 2, 23: 2, 24: 2, 25: 2, 26: 2, 27: 2, 28: 2, 29: 2, 30: 2, 31: 2, 32: 3, 33: 3, 34: 3, 35: 3, 36: 3, 37: 3, 38: 3, 39: 3}}
df = pd.DataFrame(data)
First, I would like to take the mean by day and then by month. However, taking the mean by grouping the days results in decimal months. I would like to preserve the months before I do a groupby('MONTH').mean()
In [401]: df.groupby("DAY").mean()
Out[401]:
VALVE_SCORE MONTH
DAY
3 39.7250 2.5
6 44.6500 1.0
10 32.9875 2.5
13 43.7000 1.0
17 35.6750 2.0
20 46.0500 1.0
24 32.6500 2.0
27 44.5000 1.0
I would like the end result to be:
MONTH VALVE_SCORE
1 value
2 value
3 value
Consider that, with the data you have, you want the daily mean and then the monthly mean. Putting the same data into an Excel pivot table produces the same result.
Doing the same in pandas, grouping by month alone is enough to get that result:
df.groupby(['MONTH']).mean()
DAY VALVE_SCORE
MONTH
1 16.5 44.7250
2 13.5 38.0375
3 6.5 30.8000
Since the month and day values are numeric, pandas processes them; if the 'DAY' and 'MONTH' values were strings rather than numbers, you would get this result:
VALVE_SCORE
MONTH
1 44.7250
2 38.0375
3 30.8000
So the monthly groupby gives the same numbers as computing daily means first. Note that this holds here because every day contributes the same number of rows; with unbalanced days, the direct monthly mean and the mean of daily means would differ.
Here's a possible solution. Do let me know if there is a more efficient way of doing it.
df = pd.DataFrame(data)
months = list(df['MONTH'].unique())
frames = []
for p in months:
    df_part = df[df['MONTH'] == p]
    df_part_avg = df_part.groupby("DAY", as_index=False).mean()
    df_part_avg = df_part_avg.drop('DAY', axis=1)
    frames.append(df_part_avg)
df_months = pd.concat(frames)
df_final = df_months.groupby("MONTH", as_index=False).mean()
And the result is:
In [430]: df_final
Out[430]:
MONTH VALVE_SCORE
0 1 44.7250
1 2 38.0375
2 3 30.8000
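If the intent is literally "daily mean first, then monthly mean", the same idea can also be written (a sketch of the same computation) as two chained groupbys, without the explicit loop:
daily = df.groupby(['MONTH', 'DAY'], as_index=False)['VALVE_SCORE'].mean()
df_final = daily.groupby('MONTH', as_index=False)['VALVE_SCORE'].mean()
With this data set the result matches the loop above, and also the direct groupby('MONTH').mean(), because every day contributes the same number of rows.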

Chaining generators within a comprehension

Is it possible to do something like the following as a one-liner in Python, where the resulting syntax is readable?
d = dict((i,i+1) for i in range(10))
d.update((i,i+2) for i in range(20,25))
>>> from itertools import chain
>>> dict(chain(((i,i+1) for i in range(10)),
((i,i+2) for i in range(20,25))))
{0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 20: 22, 21: 23, 22: 24, 23: 25, 24: 26}
How about this:
d = dict(dict((i,i+1) for i in range(10)), **dict(((i,i+2) for i in range(20,25))))
result:
{0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 20: 22, 21: 23, 22: 24, 23: 25, 24: 26}
@jamylak's answer is great and should do. Anyway, for this specific problem, I would probably do this:
d = dict((i, i+1) if i < 10 else (i, i+2) for i in range(25) if i < 10 or i >= 20)
This gives the same output:
d = dict((i,i+x) for x,y in [(1, range(10)), (2, range(20,25))] for i in y)
You could also write it with enumerate, so:
d = dict((i,i+x) for x,y in enumerate([range(10), range(20,25)], 1) for i in y)
But it's slightly longer, and it assumes your intention is a smooth incrementation, which might not hold later. The problem is not knowing whether you plan to extend this into an even longer expression, which would alter the requirements and affect which answer is most convenient.
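On Python 3.5+ (an addition beyond the original answers), the same merge can also be written with dict comprehensions and unpacking, and on 3.9+ with the | operator:
d = {**{i: i + 1 for i in range(10)}, **{i: i + 2 for i in range(20, 25)}}
# Python 3.9+:
d = {i: i + 1 for i in range(10)} | {i: i + 2 for i in range(20, 25)}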
