I have the following code, but when I execute it, it prints age_groups_list 17 times. Any idea why?
import pandas as pd
file = pd.read_csv(r"file location")
age_groups_list = []
for var in file[1:]:
    age = file.iloc[:, 10]
    age_groups_list.append(age)
print(age_groups_list)
The idea is that I have a CSV file with 16,000+ rows and 20 columns. I am picking the age group from column index 10, adding it to a list, and then printing the list. However, the list gets printed 17 times; this image shows the end of the printing output.
Any idea what I am doing wrong here?
Thanks
file.iloc[:, 10] already gives you all the data you need; the loop is useless.
What you see is actually a list of lists: the loop appends the entire column once per iteration.
change it to this:
import pandas as pd
file = pd.read_csv(r"file location")
age_groups_list = file.iloc[:, 10]
print(age_groups_list)
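If you actually need a plain Python list rather than a pandas Series, one option (a small sketch, reusing the same file variable as above) is:

import pandas as pd

file = pd.read_csv(r"file location")
# .tolist() converts the selected column (a Series) into a plain Python list
age_groups_list = file.iloc[:, 10].tolist()
print(age_groups_list)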
So I have this NumPy array result (final), and I want to reduce it. I mean: if a value is repeated, I want to delete the first occurrence and keep the second, third, and so on.
import hmac
import hashlib
import time
from argparse import _MutuallyExclusiveGroup
from tkinter import *
import pandas as pd
import base64
import matplotlib.pyplot as plt
import numpy as np
key="800070FF00FF08012"
key=bytes(key,'utf-8')
collision=[]
for x in range(1,1000001):
msg=bytes(f'{x}','utf-8')
digest = hmac.new(key, msg,"sha256").digest()
code = base64.b64encode(digest).decode('utf-8')
code=code[:6]
key=key.replace(key,digest)
collision.append(code)
df=pd.DataFrame(collision)
df=df[df.duplicated(keep=False)]
df_index=df.index.to_numpy()
df=df.values.flatten()
final=np.stack((df_index,df),axis=1)
Results of the variable "final":
I HAVE:
[[14093 'JRp1kX']
[43985 'KGlW7X']
[59212 'pU97Tr']
[90668 'ecTjTB']
[140615 'JRp1kX']
[218480 '25gtjT']
[344174 'dtXg6E']
[380467 'DdHQ3M']
[395699 'vnFw/c']
[503504 'dtXg6E']
[531073 'KGlW7X']
[633091 'ecTjTB']
[671091 'vnFw/c']
[672111 '25gtjT']
[785568 'pU97Tr']
[991540 'DdHQ3M']
[991548 'JRp1kX']]
And I WANT TO HAVE:
[[140615 'JRp1kX']
[503504 'dtXg6E']
[531073 'KGlW7X']
[633091 'ecTjTB']
[671091 'vnFw/c']
[672111 '25gtjT']
[785568 'pU97Tr']
[991540 'DdHQ3M']
[991548 'JRp1kX']]
That is, eliminating the first occurrence of each value that is repeated in the array.
Does someone have some code that could work for my case?
In simpler terms: if you have this list [1,2,3,4,5,1,3,5,5],
I would like to get [2,4,1,3,5,5].
df = pd.DataFrame([1, 2, 3, 4, 5, 1, 3, 5, 5])
# keep the unique rows
unique_mask = ~df.duplicated(keep=False)
# keep the repeated rows (all but the first occurrence of each)
repeated_mask = df.duplicated()
df.loc[unique_mask | repeated_mask]
0
1 2
3 4
5 1
6 3
7 5
8 5
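The same masking idea carries over to the 2-D array from the question. A minimal sketch, assuming final is the (index, code) array built above:

import numpy as np
import pandas as pd

codes = pd.Series(final[:, 1])
# keep unique codes, plus every occurrence of a repeated code except the first
keep = ~codes.duplicated(keep=False) | codes.duplicated()
final = final[keep.to_numpy()]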
final is a NumPy array, so you can use np.unique on the second column to get the indices of first occurrences together with the number of occurrences; the counts let you avoid deleting values that appear only once.
_, idx, counts = np.unique(final[:, 1], return_index=True, return_counts=True)
idx = idx[counts > 1]
final = np.delete(final, idx, axis=0)
This works on the 2-D ndarray; for your second, 1-D example, use
_, idx, counts = np.unique(final, return_index=True, return_counts=True)
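For example, on the simpler list from the question (a quick sketch):

import numpy as np

arr = np.array([1, 2, 3, 4, 5, 1, 3, 5, 5])
# idx holds the index of each value's first occurrence, counts how often it appears
_, idx, counts = np.unique(arr, return_index=True, return_counts=True)
idx = idx[counts > 1]  # keep only first occurrences of repeated values
print(np.delete(arr, idx))  # [2 4 1 3 5 5]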
Maybe you could use a for loop.
to_remove = list()
for i in range(len(your_list)):
    # mark the first occurrence of any value that repeats later
    if your_list[i] not in your_list[:i] and your_list[i] in your_list[i+1:]:
        to_remove.append(i)
removed_count = 0
for i in to_remove:
    del your_list[i - removed_count]
    removed_count += 1
You cannot del directly in the first loop because i would move on to the next index, which would skip an element every time you delete one.
The [i - removed_count] is needed because every time you delete a lower index, all higher indexes immediately shift down by one.
I think it could be written more efficiently, but this should work, maybe with small changes.
After you generate df, add the following lines:
df = pd.DataFrame(collision)
# ... your code ends here
removed_already = []
for idx in df[df.duplicated(keep=False)].index:
    if df.loc[idx][0] not in removed_already:
        removed_already.append(df.loc[idx][0])
        df.drop(index=idx, inplace=True)
# your code continues
df_index = df.index.to_numpy()
df = df.values.flatten()
final = np.stack((df_index, df), axis=1)
I'm new to Python, but I need it for a personal project, so I have this lump of code. Its purpose is to create a table and update it as necessary. The problem is that the table keeps being overwritten, and I don't know why. I'm also struggling with correctly assigning the starting position for the new lines to append; that's why total (which ends up overwritten as well) and pos are there, but I haven't figured out how to use them correctly. Any tips?
import datetime
import pandas as pd
import numpy as np
total ={}
entryTable = pd.read_csv("Entry_Table.csv")
newEntries = int(input("How many new entries?\n"))
for i in range(newEntries):
    ID = input("ID?\n")
    VQ = int(input("VQ?\n"))
    timeStamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    entryTable.loc[i] = [timeStamp, ID, VQ]
    entryTable.to_csv("Inventory_Table.csv")
    total[i] = 1
    pos = sum(total.values())
    print(pos)
inventoryTable = pd.read_csv("Inventory_Table.csv", index_col = 0)
Your variable i runs from index 0 to newEntries - 1. When you add new data to row i in your pandas DataFrame, you are overwriting the existing data in that row. If you want to add new data, try n + i, where n is the initial number of entries. You can determine n with either
n = len(entryTable)
or
n = entryTable.shape[0]
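Putting that together, a minimal sketch of the corrected loop (based on your posted code, not tested against your CSV):

import datetime
import pandas as pd

entryTable = pd.read_csv("Entry_Table.csv")
n = len(entryTable)  # number of rows already in the table
newEntries = int(input("How many new entries?\n"))
for i in range(newEntries):
    ID = input("ID?\n")
    VQ = int(input("VQ?\n"))
    timeStamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    entryTable.loc[n + i] = [timeStamp, ID, VQ]  # append after the existing rows
# write once, after the loop, instead of rewriting the file on every iteration
entryTable.to_csv("Inventory_Table.csv")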
I have a huge set of data, something like 100k lines, and I am trying to drop a row from a DataFrame if the row, which contains a list, contains a value from another DataFrame. Here's a small example.
has = [['#a'], ['#b'], ['#c, #d, #e, #f'], ['#g']]
use = [1,2,3,5]
z = ['#d','#a']
df = pd.DataFrame({'user': use, 'tweet': has})
df2 = pd.DataFrame({'z': z})
tweet user
0 [#a] 1
1 [#b] 2
2 [#c, #d, #e, #f] 3
3 [#g] 5
z
0 #d
1 #a
The desired outcome would be
tweet user
0 [#b] 2
1 [#g] 5
Things I've tried:
# this seems to work for dropping #a but not #d
for a in range(df.tweet.size):
    for search in df2.z:
        if search in df.loc[a].tweet:
            df.drop(a)
# this works for my small-scale example but throws an error on my big data
df['tweet'] = df.tweet.apply(', '.join)
test = df[~df.tweet.str.contains('|'.join(df2['z'].astype(str)))]
# the error being "unterminated character set at position 1343770"
# I went to check what was on that line and it returned this:
basket.iloc[1343770]
user_id 17060480
tweet [#IfTheyWereBlackOrBrownPeople, #WTF]
Name: 4612505, dtype: object
Any help would be greatly appreciated.
Is ['#c, #d, #e, #f'] one string, or a list like ['#c', '#d', '#e', '#f']?
has = [['#a'], ['#b'], ['#c', '#d', '#e', '#f'], ['#g']]
use = [1,2,3,5]
z = ['#d','#a']
df = pd.DataFrame({'user': use, 'tweet': has})
df2 = pd.DataFrame({'z': z})
A simple solution would be:
screen = set(df2.z.tolist())
to_delete = list()  # collect ids first so we only call drop once
for id, row in df.iterrows():
    if set(row.tweet).intersection(screen):
        to_delete.append(id)
df.drop(to_delete, inplace=True)
Speed comparison (for 10,000 rows):
import time

st = time.time()
screen = set(df2.z.tolist())
to_delete = list()
for id, row in df.iterrows():
    if set(row.tweet).intersection(screen):
        to_delete.append(id)
df.drop(to_delete, inplace=True)
print(time.time() - st)
2.142000198364258
st = time.time()
for a in df.tweet.index:
    for search in df2.z:
        if search in df.loc[a].tweet:
            df.drop(a, inplace=True)
            break
print(time.time() - st)
43.99799990653992
For me, your code works if I make several adjustments.
First, you're missing the last row when using range(df.tweet.size); either increase the range or (more robust, if you don't have a continuously increasing index) use df.tweet.index.
Second, you never apply your drop; use inplace=True for that.
Third, you have #d inside a single string: '#c, #d, #e, #f' is not a list, and you have to change it to a list for the lookup to work.
So if you change that, the following code works fine:
has = [['#a'], ['#b'], ['#c', '#d', '#e', '#f'], ['#g']]
use = [1,2,3,5]
z = ['#d','#a']
df = pd.DataFrame({'user': use, 'tweet': has})
df2 = pd.DataFrame({'z': z})
for a in df.tweet.index:
    for search in df2.z:
        if search in df.loc[a].tweet:
            df.drop(a, inplace=True)
            break  # once we have dropped this row, no need to check the remaining tags
This will provide the desired result. Be aware that it is potentially suboptimal, since it is not vectorized.
EDIT:
You can turn each string into a proper list with the following:

from itertools import chain

# split on ", " so the resulting tags carry no leading whitespace
df.tweet = df.tweet.apply(lambda l: list(chain(*map(lambda lelem: lelem.split(", "), l))))

This applies a function to each row (assuming each row contains a list with one or more elements): split each element (a string) on commas into a new list, and flatten the lists in a row (if there are multiple) into one.
EDIT2:
Yes, this is not really performant, but it basically does what was asked. Keep that in mind, and once it works, try to improve your code (fewer for iterations, tricks like collecting the indices and then dropping them all at once); one vectorized possibility is sketched below.
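For instance, a vectorized sketch (assumes pandas 0.25+ for Series.explode, and that df.tweet holds real lists as in the snippets above):

# one tag per row, test membership, then collapse back to one boolean per tweet
mask = df.tweet.explode().isin(df2.z).groupby(level=0).any()
df = df[~mask]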
I love you all. First time with Python: I am reading in a CSV with 10,842 cities and counting how many occurrences there are of each. When I print to the terminal it outputs the first 29 cities, prints ..., and then prints 10813 - 10842. This is the code:
import pandas as pd
df = pd.read_csv('Csz.csv')
s = df['City'].value_counts().rename('Total_City')
df = df.join(s, on='City')
print(df)
I'm a bit lost on how to get all of them to print, and hopefully afterwards I'll figure out how to remove duplicates. Thank you for your help!
Put this in your code right after the imports
pd.options.display.max_rows = 999
See the docs for a full explanation:
https://pandas.pydata.org/pandas-docs/stable/options.html
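If you would rather not change the option globally, a temporary override works too (a small sketch):

import pandas as pd

# None means "no limit"; the setting is restored when the with-block exits
with pd.option_context('display.max_rows', None):
    print(df)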
For a project for my lab, I'm analyzing Twitter data. The tweets we've captured all have the word 'sex' in them; that's the keyword we filtered the TwitterStreamer to capture on.
I converted the CSV where all of the tweet data (JSON metatags) is housed into a pandas DataFrame and saved the 'text' column to isolate the tweet text.
import pandas as pd
import csv
df = pd.read_csv('tweets_hiv.csv')
saved_column4 = df.text
print saved_column4
Out comes the correct output:
0 Some example tweet text
1 Oh hey look more tweet text #things I hate #stuff
...a bunch more lines
Name: text, Length: 8540, dtype: object
But when I try this
from textblob import TextBlob
tweetstr = str(saved_column4)
tweets = TextBlob(tweetstr).upper()
print tweets.words.count('sex', case_sensitive=False)
My output is 22.
There should be AT LEAST as many occurrences of the word 'sex' as there are lines in the CSV, and likely more. I can't figure out what's happening here. Is TextBlob not handling the dtype: object output correctly?
I'm not entirely sure this is methodologically correct as far as language processing goes, but using join will give you the count you need.
import pandas as pd
from textblob import TextBlob
tweets = pd.Series('sex {}'.format(x) for x in range(1000))
tweetstr = " ".join(tweets.tolist())
tweetsb = TextBlob(tweetstr).upper()
print tweetsb.words.count('sex', case_sensitive=False)
# 1000
If you just need the count without necessarily using TextBlob, then just do:
import pandas as pd
tweets = pd.Series('sex {}'.format(x) for x in range(1000))
sex_tweets = tweets.str.contains('sex', case=False)
print sex_tweets.sum()
# 1000
You can get a TypeError in the first snippet if one of your elements is not of type string; this is more of a join issue. A simple test can be done using the following snippet:
# tweets = pd.Series('sex {}'.format(x) for x in range(1000))
tweets = pd.Series(x for x in range(1000))
tweetstr = " ".join(tweets.tolist())
Which gives the following result:
Traceback (most recent call last):
File "F:\test.py", line 6, in <module>
tweetstr = " ".join(tweets.tolist())
TypeError: sequence item 0: expected string, numpy.int64 found
A simple workaround is to convert x into a string inside the generator expression before using join, like so:
tweets = pd.Series(str(x) for x in range(1000))
Or you can be more explicit: create a list first, map the str function over it, and then use join.
tweetlist = tweets.tolist()
tweetstr = map(str, tweetlist)
tweetstr = " ".join(tweetstr)
The CSV conversion is not the problem! When you use str() on a column of a DataFrame (that is, a Series), it produces a "print-friendly" rendering of the Series, which means cutting out the majority of the data and displaying only the first few and last few rows. Here is a transcript of an IPython session that will probably illustrate the issue better:
In [1]: import pandas as pd
In [2]: blah = pd.Series('tweet %d' % n for n in range(1000))
In [3]: blah
Out[3]:
0 tweet 0
1 tweet 1
... (output continues from 1 to 29)
29 tweet 29
... (OUTPUT SKIPS HERE)
970 tweet 970
... (output continues from 970 to 998)
998 tweet 998
999 tweet 999
dtype: object
In [4]: blahstr = str(blah)
In [5]: blahstr.count('tweet')
Out[5]: 60
So, since the output of the str() operation cuts off my data (and might even truncate column values if I had used longer strings), I don't get 1000; I get 60.
If you want to do it your way (combining everything back into a single string and working with it that way), there's no point in using a library like Pandas. Pandas gives you better ways:
Working With a Series of Strings
Pandas has tools for working with a Series that contains strings. Here is a tutorial-like page about it, and here is the full string-handling API documentation. In particular, to find the number of uses of the word "sex", you could do something like this (assuming df is a DataFrame and text is the column containing the tweets):
import re
counts = df['text'].str.count('sex', re.IGNORECASE)
counts should be a Series containing the number of occurrences of "sex" in each tweet. counts.sum() would give you the total number of usages, which should hopefully be at least as large as the number of tweets.
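As a quick illustration on toy data (a sketch, not your real tweets):

import re
import pandas as pd

tweets = pd.Series(['Sex ed matters', 'no match here', 'sex sex'])
counts = tweets.str.count('sex', re.IGNORECASE)
print(counts.tolist())  # [1, 0, 2]
print(counts.sum())     # 3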