I am trying to make a program that shows me the data of two specific coins. It basically fetches the data in an infinite "for loop" and displays the info until I close the program.
Now I want to get the last two elements produced by this loop each time it runs and make calculations with them. I know I can't just hold all the items in a list, and I am not sure how to store only the last two and use them every time.
for line in lines:
    coinsq = line.strip()
    url = priceKey + coinsq + "USDT"
    data = requests.get(url)
    datax = data.json()
    print(datax['symbol'] + " " + datax['price'])
Store the data in a deque (from the collections module).
Initialise your deque like this:
from collections import deque
d = deque([], 2)
Now you can append to d as many times as you like and it will only ever have the most recent two entries.
So, for example:
d.append('a')
d.append('b')
d.append('c')
for e in d:
    print(e)
Will give the output:
b
c
Adapting your code to use this technique should be trivial.
I recommend this approach over using two variables because it is easier to change: if you later decide (for some reason) that you want the last N values, all you need to do is change the deque constructor.
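Adapting the loop from the question might look roughly like this (a sketch only; lines, priceKey, and the actual calculation are assumed from the question's code):

from collections import deque
import requests

d = deque([], 2)  # only ever keeps the two most recent responses

for line in lines:  # `lines` and `priceKey` come from the question's setup
    coinsq = line.strip()
    datax = requests.get(priceKey + coinsq + "USDT").json()
    print(datax['symbol'] + " " + datax['price'])
    d.append(datax)
    if len(d) == 2:
        previous, latest = d[0], d[1]
        # e.g. compare float(previous['price']) with float(latest['price']) here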
You can just use two variables that you update for each new element; at the end you will have the last two elements seen:
pre_last = None
last = None
for line in lines:
    coinsq = line.strip()
    url = priceKey + coinsq + "USDT"
    data = requests.get(url)
    datax = data.json()
    print(datax['symbol'] + " " + datax['price'])
    pre_last = last
    last = datax
    # Do the required calculations with last and pre_last
(And just to be exact, this isn't an infinite loop; otherwise there wouldn't be a 'last' element.)
As your script has no prior information about when execution is going to halt, I suggest defining a queue-like structure. In each iteration you update your last item and your previous-to-last, so you only ever keep two elements in memory. I don't know how you were planning on accessing those two elements once execution has finished, but you should be able to read that queue when the loop is over.
Sorry for not providing code; this can be done in many ways, so I thought it was better to suggest a way of proceeding.
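A minimal sketch of that idea, reusing the fetch code from the question (the two-slot list below is just one of the many possible ways):

last_two = []  # queue-like structure that holds at most two items
for line in lines:  # lines / priceKey as in the question
    coinsq = line.strip()
    datax = requests.get(priceKey + coinsq + "USDT").json()
    last_two.append(datax)
    if len(last_two) > 2:
        last_two.pop(0)  # drop the oldest so only the last two stay in memory
# last_two is still available here once the loop stops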
You can define a variable for the second-last element of the for loop, and use the datax variable that's already defined in the loop as the last element:
sec_last = None
datax = None
for line in lines:
    sec_last = datax
    coinsq = line.strip()
    url = priceKey + coinsq + "USDT"
    data = requests.get(url)
    datax = data.json()
    print(datax['symbol'] + " " + datax['price'])
print("Last element:", datax)
print("Second last element:", sec_last)
I am trying to build a list of possible string combinations to then iterate against. I am running out of memory executing the line below, which I understand, because it expands to several billion entries.
data = list(map(''.join,chain.from_iterable(product(string.digits+string.ascii_lowercase+'/',repeat = i) for i in range(0,7))))
So I think, rather than creating this massive iterable list all at once, I should generate and execute against it in waves, with some kind of "holding string" that I save and can restart from when I want. I.e., generate and iterate against a million rows, then save the holding string to file; then start up again with the next million rows, but begin my mapping/iterations at the "holding string" or the next row. I have no clue how to do that. I think I might have to drop the .from_iterable(product( code that I had implemented. If that idea is not clear (or is clear but stupid), let me know.
Another option, rather than working around the memory issue, would be to somehow optimize the iterable list itself, but I'm not sure how I would do that either. I'm trying to map an API that has no existing documentation. While I don't know that a non-exhaustive list is the route to take, I'm certainly open to suggestions.
Here is the code chunk I've been using:
import csv
import string
from itertools import product, chain
#Open stringfile. If it doesn't exist, create it
try:
    with open(stringfile) as f:
        reader = csv.reader(f, delimiter=',')
        data = list(reader)
        f.close()
except:
    data = list(map(''.join, chain.from_iterable(product(string.digits + string.ascii_lowercase + '/', repeat=i) for i in range(0, 6))))
    f = open(stringfile, 'w')
    f.write(str('\n'.join(data)))
    f.close()
    pass
#Iterate against
...
EDIT: Further poking at this led me to this thread, which covers a similar topic. There is discussion about using islice, which helps me post-mapping (the script crashed last night while doing the API calls due to an error in my exception handling); I just restarted it at the 400k-th iteration.
Can I use .islice within a product? So for the generator, generate items 10mil-12mil (for example) and operate on just those items as a way to preserve memory?
Here is the most recent snippet of what I'm doing. You can see I plugged in the islice further down in the actual iteration, but I want to islice in the actual generation (the data = line).
#Open stringfile. If it doesn't exist, create it
try:
    with open(stringfile) as f:
        reader = csv.reader(f, delimiter=',')
        data = list(reader)
        f.close()
except:
    data = list(map(''.join, chain.from_iterable(product(string.digits + string.ascii_lowercase + '/', repeat=i) for i in range(3, 5))))
    f = open(stringfile, 'w')
    f.write(str('\n'.join(data)))
    f.close()
    pass
print("Total items: " + str(len(data) - substart))
fdf = pd.DataFrame()
sdf = pd.DataFrame()
qdf = pd.DataFrame()
attctr = 0
#Iterate through the string combination list
for idx, kw in islice(enumerate(data), substart, substop):
    #Attempt API call. Do the cooldown function if there is an issue.
    if idx/1000 == int(idx/1000):
        print("Iteration " + str(idx) + " of " + str(len(data)))
    attctr += 1
    if attctr == attcd:
        print("Cooling down!")
        time.sleep(cdtimer)
        attctr = 0
    try:
        ....
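To slice in the generation step itself rather than after building data, one option (a sketch; substart and substop are the same window variables used in the snippet above) is to keep the product/chain expression lazy and apply islice to it directly, so the full list is never materialized:

from itertools import product, chain, islice
import string

combos = map(''.join, chain.from_iterable(
    product(string.digits + string.ascii_lowercase + '/', repeat=i)
    for i in range(3, 5)))  # still a lazy generator, nothing is generated or stored yet

for idx, kw in enumerate(islice(combos, substart, substop), start=substart):
    ...  # only this window is ever produced

Note that islice still has to step through the skipped prefix to reach substart, but it discards those items instead of holding them in memory.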
The code I am running so far is as follows
import os
import math
import statistics

def main():
    infile = open('USPopulation.txt', 'r')
    values = infile.read()
    infile.close()
    index = 0
    while index < len(values):
        values(index) = int(values(index))
        index += 1
    print(values)

main()
The text file contains 41 rows of numbers each entered on a single line like so:
151868
153982
156393
158956
161884
165069
168088
etc.
My task is to create a program which shows the average change in population during the time period, the year with the greatest increase in population, and the year with the smallest increase in population (from the previous year) during the time period.
The code will print each of the text file's entries on a single line, but upon trying to convert them to int for use with the statistics package I am getting the following error:
values(index) = int(values(index))
SyntaxError: can't assign to function call
The values(index) = int(values(index)) line was taken from my reading as well as resources on Stack Overflow.
You can change values = infile.read() to values = list(infile.read())
and it will output a list instead of a string.
One thing that tends to happen when reading a file like this is that at the end of every line there is an invisible '\n' that marks a new line in the text file. So, instead of using values = list(infile.read()), an easy way to split the contents by line is values = values.split('\n'), as long as values was previously declared.
The while loop that you have can easily be replaced with a for loop, where you would use len(values) as the end.
The values(index) syntax is what causes the error, since parentheses mean a function call; indexing uses square brackets, so in the loop you can use values[i] = int(values[i]) to turn each entry into an integer, and then values becomes a list of integers.
How I would personally set it up would be:
import os
import math
import statistics

def main():
    infile = open('USPopulation.txt', 'r')
    values = infile.read()
    infile.close()
    values = values.split('\n')  # Splits based off of lines
    for i in range(0, len(values)):  # Loops over the length of values and turns each part of values into an integer
        values[i] = int(values[i])
    changes = []
    # Use a for loop to get the changes between each number.
    for i in range(0, len(values) - 1):  # The -1 avoids an indexing error when reading values[i+1] at the end
        changes.append(values[i + 1] - values[i])  # This gets the difference between the current value and the next.
    print('The max change :', max(changes), 'The minimal change :', min(changes))
    # Each entry in changes lines up with the values[i] it starts from (changes has one fewer item than values).
    print('A change of :', max(changes), 'Happened at', values[changes.index(max(changes))])  # changes.index(max(changes)) gets the position of the highest change; the same index in values is the population it started from.
    print('A change of :', min(changes), 'Happened at', values[changes.index(min(changes))])  # Pretty much the same as above, just with the minimum
    # If you wanted to print the second number, you would do values[changes.index(min(changes)) + 1]
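    # Hypothetical addition, not part of the original answer: the task also asks for the average
    # change over the period, which the already-imported statistics module can provide.
    print('The average change :', statistics.mean(changes))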
main()
If you need any clarification on anything I did in the code, just ask.
I personally would use numpy for reading a text file.
In your case I would do it like this:
import numpy as np

def main():
    infile = np.loadtxt('USPopulation.txt')
    maxpop = np.max(infile)
    minpop = np.min(infile)
    print(f'maximum population = {maxpop} and minimum population = {minpop}')

main()
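Since the task is really about the year-over-year increases rather than the raw populations, a possible extension of the numpy approach (a sketch, not part of the original answer) is np.diff:

import numpy as np

pops = np.loadtxt('USPopulation.txt')
changes = np.diff(pops)  # change from each year to the next (one fewer entry than pops)
print(f'average change = {changes.mean()}')
print(f'largest increase happens going into year index {changes.argmax() + 1}')
print(f'smallest increase happens going into year index {changes.argmin() + 1}')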
This might be a tough question and I will do my best to explain it as well as I can!
I'm trying to create a script where I run different JSON files through a for loop (each of these JSON files has its own data). What I want to do is add those values to a list, where I later check whether the first object in the JSON (the name) is already present. If it is found with the same name, we check whether its last object (the number) is higher or lower than it was previously for that name. If it is higher, we print that it has increased and we change the stored value to the new one (inside the list).
Another thing I want it to do is append to the list only once, not n times depending on how many JSON files I use.
I start by showing the JSON that I use; you can see it contains the same name but a different number.
{
    "name": "Albert",
    "image": "https://pbs.twimg.com/profile_images/....jpg",
    "number": "5"
}
-----------------------------------
{
    "name": "Albert",
    "image": "https://pbs.twimg.com/profile_images/....jpg",
    "number": "6"
}
OK, before I continue to explain, here is the code I wrote so I can explain it better...
webhook_list = [
    'https://discordapp.com/api/webhooks/5133124/slack',
    'https://discordapp.com/api/webhooks/5124124/slack',
    'https://discordapp.com/api/webhooks/5112412/slack']  # The different Discord API keys/webhooks

def get_identifier(thread):
    thread_id = thread['name']  # Grab name from the JSON
    try:
        thread_image = thread['image']  # Grab image from the JSON
    except KeyError:
        thread_image = None
    try:
        thread_number = thread['numbers']  # Grab number from the JSON
    except KeyError:
        thread_number = None
    identifier = ('{}%{}%{}').format(thread_id, thread_image, thread_number)  # Join them all into one string
    return identifier

def script():
    old_list = []  # old_list, where we append when new items arrive.
    while True:
        for thread in [line.rstrip('\n') for line in open('names.txt')]:  # Check all the names in the txt file (this could just as well be a list; it doesn't matter)
            get_value_identifier = get_identifier(thread)  # Pass the thread to get_identifier, which returns the identifier
            if get_identifier(thread) not in old_list:  # If this value is not in the old list, we go in here
                # Slack/Discord function
                directory = os.fsencode('./slack')
                for counters, file in enumerate(os.listdir(directory)):
                    filename = os.fsdecode(file)
                    if filename.endswith(".json"):
                        with open('./slack/' + filename) as slackAttachment:
                            data = json.loads(slackAttachment.read())
                        data_list = []
                        # *****************---Picture---*****************
                        try:
                            data["attachments"][0]["thumb_url"] = information['image_url']  # Add everything to data so we can later print it out to Discord/Slack
                        except Exception:
                            data["attachments"][0][
                                "thumb_url"] = 'https://cdn.browshot.com/static/images/not-found.png'
                        # *****************---Footer---*****************
                        data["attachments"][0]["footer"] = str(
                            data["attachments"][0]["footer"] + ' | ' + datetime.now().strftime(
                                '%Y-%m-%d [%H:%M:%S.%f')[:-3] + "]")
                        # -------------------------------------------------------------------------
                        a = get_value_identifier.split("%")  # Split the identifier back into name, image, number
                        for i, items in zip(range(len(old_list)), old_list):  # Walk old_list together with its indices (I couldn't think of another way; this can be changed)
                            old_list_value = old_list[i].split("%")  # Split the old_list value the same way as *a*
                            if a[0] in old_list_value[0]:  # If the first value of *a* matches a stored first value...
                                if old_list_value[2] < a[2]:  # Check whether the number is higher than the stored one. If it is, do the things below
                                    data["attachments"][0]["title"] = information['name'].upper()
                                    data_list.append((webhook_list[counters], data))
                                    for hook, data in data_list:
                                        threading.Thread(target=sendData, args=(hook, data)).start()
                                    old_list[i] = get_value_identifier
                                    break
                                elif len(old_list_value[2]) >= len(a[2]):  # Check whether the number is lower than the stored one. If it is, do the things below
                                    old_list[i] = get_value_identifier
                                    break
                        else:  # If nothing is found, just add the value to old_list.
                            data["attachments"][0]["title"] = information['name'].upper()
                            data_list.append((webhook_list[counters], data))
                            for hook, data in data_list:
                                threading.Thread(target=sendData, args=(hook, data)).start()
                            old_list.append(get_value_identifier)
            else:
                randomtime = random.randint(3, 7)
                logger.warn('No new item found! - retrying in %d secs' % (randomtime))
                time.sleep(randomtime)
As you can see, this is the code I use for opening each JSON file; data = json.loads(slackAttachment.read()) means everything ends up in "data" as parsed JSON.
directory = os.fsencode('./slack')
for counters, file in enumerate(os.listdir(directory)):
    filename = os.fsdecode(file)
    if filename.endswith(".json"):
        with open('./slack/' + filename) as slackAttachment:
            data = json.loads(slackAttachment.read())
Whenever one pass of that loop is done, everything ends up in data, and at the end we can print the data or send it in a request to Discord/Slack.
But before printing it out to Discord/Slack, I first check whether the name is already in the list. We do that by splitting on each %, which gives us "name image number".
In the if statement we check whether a[0] (the current thread's name) appears somewhere in the old list.
If it is in the old list, we check the last number of that entry to see if it is higher or lower.
If it is higher, we print it out, since it is an increase in value, and then we change the previous old_list value to the new one.
If it is lower, we just change the previous old_list value to the new one.
If there is no match in old_list, we simply append the value to the list.
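One way to picture that compare-and-update logic without the "%"-joined strings is a dict keyed on the name (a sketch only; the webhook/JSON handling from the script is left out, and handle() is a name made up here, operating on the parsed JSON shown at the top):

seen = {}  # name -> last number we stored for it

def handle(thread):
    name, number = thread['name'], int(thread['number'])
    if name not in seen:       # not in the "old list": remember it and notify
        seen[name] = number
        return 'new'
    if number > seen[name]:    # higher than before: notify and update
        seen[name] = number
        return 'increased'
    seen[name] = number        # lower (or equal): just update, no notification
    return 'unchanged'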
Here is that part of the code again:
a = get_value_identifier.split("%")
for i, items in zip(range(len(old_list)), old_list):
    old_list_value = old_list[i].split("%")
    if a[0] in old_list_value[0]:
        if old_list_value[2] < a[2]:
            data["attachments"][0]["title"] = information['name'].upper()
            data_list.append((webhook_list[counters], data))
            for hook, data in data_list:
                threading.Thread(target=sendData, args=(hook, data)).start()
            old_list[i] = get_value_identifier
            break
        elif len(old_list_value[2]) > len(a[2]):
            old_list[i] = get_value_identifier
            break
else:
    data["attachments"][0]["title"] = information['name'].upper()
    data_list.append((webhook_list[counters], data))
    for hook, data in data_list:
        threading.Thread(target=sendData, args=(hook, data)).start()
    old_list.append(get_value_identifier)
And here is the mechanical issue.
The problem I am having is that, at the beginning, when we run
directory = os.fsencode('./slack')
for counters, file in enumerate(os.listdir(directory)):
    filename = os.fsdecode(file)
    if filename.endswith(".json"):
        with open('./slack/' + filename) as slackAttachment:
            data = json.loads(slackAttachment.read())
it ends up looping through the section shown earlier,

a = get_value_identifier.split("%")
for i, items in zip(range(len(old_list)), old_list):
    old_list_value = old_list[i].split("%")......

x times, depending on how many files there are in the slack folder. This becomes an issue from the second pass onward, because if the first pass finds a new item it appends it to old_list, which means that on the second Slack/Discord pass there is already a matching value in old_list. And that is where the trouble starts: the first pass will always be correct, but after that it will not give the correct answers.
So I have two questions:
How can I make it so that whenever the loop hits the if, elif, or else branch, it sends the same thing to all the Discord/Slack webhooks at once?
When it hits the if or elif branch, it is supposed to replace old_list[i] with the "newer" value, get_value_identifier. How can I make it do that only once? Because I believe that if I run through 3 Slacks, as in my case, I end up with 3 copies of the same value in old_list due to the for loop.
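For what it's worth, the usual way to get both behaviours is to touch old_list exactly once per thread, and only then fan the result out to every webhook. A rough sketch of that reordering (handle() is the made-up helper from the sketch above; threads, json_files, and build_payload() are hypothetical stand-ins for the names file, the ./slack files, and the title/footer/thumb work):

for thread in threads:                                     # one pass per thread, not per file
    result = handle(thread)                                # compare and update the stored state exactly once
    if result in ('new', 'increased'):
        for counters, filename in enumerate(json_files):   # only now loop over the slack/discord files
            data = build_payload(filename, thread)         # hypothetical: load the json, set title/footer/thumb
            threading.Thread(target=sendData, args=(webhook_list[counters], data)).start()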
I think that is it from me, and I hope I explained it as well as I could! If anything more needs to be added, just ask; I will be pretty active now and can edit the question based on your comments below.
(Code below)
I'm scraping a website and the data I'm getting back is in 2 multi-dimensional arrays. I want everything in JSON format because I want to save it and load it in again later when I add "tags".
To be less vague: I'm writing a program which takes in data like what characters you have and what the missions require you to do (you can complete multiple at once if the attributes align), then checks that against a list of attributes that each character fulfills and returns a sorted list of the best characters for the context.
Right now I'm only scraping character data but I've already "got" the attribute data per character - the problem there was that it wasn't sorted by name so it was just a randomly repeating list that I needed to be able to look up. I still haven't quite figured out how to do that one.
Right now I have 2 arrays, 1 for the headers of the table and one for the rows of the table. The rows contain the "Answers" for the Header's "Questions" / "Titles" ; ie Maximum Level, 50
This is true for everything but the first entry which is the Name, Pronunciation (and I just want to store the name of course).
So:
Iterations = 0
While loop based on RowArray length / 9 (While Iterations <= that)
HeaderArray[0] gives me the name
RowArray[Iterations + 1] gives me data type 2
RowArray[Iterations + 2] gives me data type 3
Repeat until Array[Iterations + 8]
Iterations +=9
So I'm going through and appending these to separate lists - single arrays like CharName[] and CharMaxLevel[] and so on.
But I'm actually not sure whether that's going to make this easier or not, because my end goal here is to send "CharacterName" and get stuff back based on that, AND be able to send in "DesiredTraits" and get back the "CharacterNames who fit that trait". Which means I also need to figure out how to store that category data semi-efficiently. There are over 80 possible categories and most characters only fit into about 10. I don't know how I'm going to store or load that data.
I'm assuming JSON is the best way? And I'm trying to keep it all in one file for performance and code readability reasons - don't want a file for each character.
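For what it's worth, a common single-file layout for this is a JSON object keyed by character name with the traits stored as a list; a sketch only (the field names and example values here are made up from names mentioned in the question):

import json

characters = {
    "Sully": {"max_level": 50, "traits": ["Blue", "Monster's Inc", "Male"]},
    "Mickey": {"max_level": 50, "traits": ["Black", "White Gloves", "Mickey & Friends"]},
}

with open('TsumData.json', 'w') as f:     # one file for everything
    json.dump(characters, f, indent=1)

with open('TsumData.json') as f:          # load it back in later
    characters = json.load(f)

wanted = {"Blue", "Monster's Inc", "Male"}
ranked = sorted(characters,
                key=lambda name: len(wanted & set(characters[name]["traits"])),
                reverse=True)             # names ordered by how many desired traits they match
print(ranked)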
CODE: (Forgive me, I've never scraped anything before + I'm actually somewhat new to Python - just got it 4? days ago)
https://pastebin.com/yh3Z535h
^ In the event anyone wants to run this and this somehow makes it easier to grab the raw code (:
import time
import requests, bs4, re
from urllib.parse import urljoin
import json
import os
target_dir = r"D:\00Coding\Js\WebScraper" #Yes, I do know that storing this in my Javascript folder is filthy
fullname = os.path.join(target_dir,'TsumData.txt')
StartURL = 'http://disneytsumtsum.wikia.com/wiki/Skill_Upgrade_Chart'
URLPrefix = 'http://disneytsumtsum.wikia.com'
def make_soup(url):
    r = requests.get(url)
    soup = bs4.BeautifulSoup(r.text, 'lxml')
    return soup

def get_links(url):
    soup = make_soup(url)
    a_tags = soup.find_all('a', href=re.compile(r"^/wiki/"))
    links = [urljoin(URLPrefix, a['href']) for a in a_tags]  # convert relative url to absolute url
    return links
def get_tds(link):
    soup = make_soup(link)
    #tds = soup.find_all('li', class_="category normal") #This will give me the attributes / tags of each character
    tds = soup.find_all('table', class_="wikia-infobox")
    RowArray = []
    HeaderArray = []
    if tds:
        for td in tds:
            #print(td.text.strip()) #This is everything
            rows = td.findChildren('tr')#[0]
            headers = td.findChildren('th')#[0]
            for row in rows:
                cells = row.findChildren('td')
                for cell in cells:
                    cell_content = cell.getText()
                    clean_content = re.sub('\s+', ' ', cell_content).strip()
                    if clean_content:
                        RowArray.append(clean_content)
            for row in rows:
                cells = row.findChildren('th')
                for cell in cells:
                    cell_content = cell.getText()
                    clean_content = re.sub('\s+', ' ', cell_content).strip()
                    if clean_content:
                        HeaderArray.append(clean_content)
    print(HeaderArray)
    print(RowArray)
    return (RowArray, HeaderArray)
    #Output = json.dumps([dict(zip(RowArray, row_2)) for row_2 in HeaderArray], indent=1)
    #print(json.dumps([dict(zip(RowArray, row_2)) for row_2 in HeaderArray], indent=1))
    #TempFile = open(fullname, 'w') #Read only, Write Only, Append
    #TempFile.write("EHLLO")
    #TempFile.close()
    #print(td.tbody.Series)
    #print(td.tbody[Series])
    #print(td.tbody["Series"])
    #print(td.data-name)
    #time.sleep(1)
if __name__ == '__main__':
    links = get_links(StartURL)
    MainHeaderArray = []
    MainRowArray = []
    MaxIterations = 60
    Iterations = 0
    for link in links: #Specifically I'll need to return and append the arrays here because they're being cleared repeatedly.
        #print("Getting tds calling")
        if Iterations > 38: #There are this many webpages it'll first look at that don't have the data I need
            TempRA, TempHA = get_tds(link)
            MainHeaderArray.append(TempHA)
            MainRowArray.append(TempRA)
            MaxIterations -= 1
        Iterations += 1
        #print(MaxIterations)
        if MaxIterations <= 0: #I don't want to scrape the entire website for a prototype
            break
    #print("This is the end ??")
    #time.sleep(3)
    #jsonized = map(lambda item: {'Name':item[0], 'Series':item[1]}, zip())
    print(MainHeaderArray)
    #time.sleep(2.5)
    #print(MainRowArray)
    #time.sleep(2.5)
    #print(zip())
    TsumName = []
    TsumSeries = []
    TsumBoxType = []
    TsumSkillDescription = []
    TsumFullCharge = []
    TsumMinScore = []
    TsumScoreIncreasePerLevel = []
    TsumMaxScore = []
    TsumFullUpgrade = []
    Iterations = 0
    MaxIterations = len(MainRowArray)
    while Iterations <= MaxIterations: #This will fire 1 time per Tsum
        print(Iterations)
        print(MainHeaderArray[Iterations][0]) #Holy this gives us Mickey ;
        print(MainHeaderArray[Iterations+1][0])
        print(MainHeaderArray[Iterations+2][0])
        print(MainHeaderArray[Iterations+3][0])
        TsumName.append(MainHeaderArray[Iterations][0])
        print(MainRowArray[Iterations][1])
        #At this point it will, of course, crash - that's because I only just realized I needed to append AND I just realized that everything
        #Isn't stored in a list as I thought, but rather a multi-dimensional array (as you can see below I didn't know this)
        TsumSeries[Iterations] = MainRowArray[Iterations+1]
        TsumBoxType[Iterations] = MainRowArray[Iterations+2]
        TsumSkillDescription[Iterations] = MainRowArray[Iterations+3]
        TsumFullCharge[Iterations] = MainRowArray[Iterations+4]
        TsumMinScore[Iterations] = MainRowArray[Iterations+5]
        TsumScoreIncreasePerLevel[Iterations] = MainRowArray[Iterations+6]
        TsumMaxScore[Iterations] = MainRowArray[Iterations+7]
        TsumFullUpgrade[Iterations] = MainRowArray[Iterations+8]
        Iterations += 9
    print(Iterations)
    print("It's Over")
    time.sleep(3)
    print(TsumName)
    print(TsumSkillDescription)
Edit:
tl;dr my goal here is to be like
"For this Mission Card I need a Blue Tsum with high score potential, a Monster's Inc Tsum for a bunch of games, and a Male Tsum for a long chain.. what's the best Tsum given those?" and it'll be like "SULLY!" and automatically select it or at the very least give you a list of Tsums. Like "These ones match all of them, these ones match 2, and these match 1"
Edit 2:
Here's the command Line Output for the code above:
https://pastebin.com/vpRsX8ni
Edit 3: Alright, just got back from a short break. With some minor looking over I see what happened - my append code is saying "append this list to the array", meaning I've got a list of lists for both the Header and Row arrays that I'm storing. So I can confirm (for myself at least) that these aren't nested lists per se, but they are definitely 2 lists, each containing a single list at every entry. Definitely not a dictionary or anything "special case" at least. This should help me quickly find an answer now that I'm not throwing "multi-dimensional list" around my Google searches or wondering why the list code isn't working (it's expecting 1 value and gets a list instead).
Edit 4:
I need to simply add another list! But super nested.
It'll just store the categories that the Tsum has as a string.
so Array[10] = ArrayOfCategories[Tsum] (which contains every attribute in string form that the Tsum has)
So that'll be ie TsumArray[10] = ["Black", "White Gloves", "Mickey & Friends"]
And then I can just use the "Switch" that I've already made in order to check them. Possibly. Not feeling too well and haven't gotten that far yet.
Just use with open(...) as json_file and write/read (super easy).
I ultimately stored 3 JSON files. No big deal; much easier than appending everything into one big file.
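In case it helps, the with open / json pattern that answer refers to is roughly this (the file name and the dict layout are just placeholders; TsumName and TsumSkillDescription are the lists from the question's code):

import json

with open('tsum_data.json', 'w') as json_file:   # write
    json.dump({"names": TsumName, "skills": TsumSkillDescription}, json_file, indent=1)

with open('tsum_data.json') as json_file:        # read it back later
    saved = json.load(json_file)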
In the code below, ids is an array which contains the steam64 ids of all users in your friends list. According to the Steam web API documentation, GetPlayerSummaries only takes a list of up to 100 comma-separated steam64 ids. Some users have more than 100 friends, and instead of running a for loop 200 times that calls the API each time, I want to take the array in sets of 100 steam ids. What would be the most efficient way to do this (in terms of speed)?
I know that I can do ids[0:100] to grab the first 100 elements of an array, but how do I accomplish this for a friends list of, say, 230 users?
def getDescriptions(ids):
    sids = ','.join(map(str, ids))
    r = requests.get('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key=' + API_KEY + '&steamids=' + sids)
    data = r.json()
    ...
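For reference, the plain-slicing version of what the question describes (step through ids 100 at a time) could be sketched like this:

import requests

def getDescriptions(ids):
    for start in range(0, len(ids), 100):   # 0, 100, 200, ... so 230 friends means 3 calls
        sids = ','.join(map(str, ids[start:start + 100]))
        r = requests.get('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key='
                         + API_KEY + '&steamids=' + sids)
        data = r.json()
        ...  # do something with data for this batch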
Utilizing the code from this answer, you can break this into groups of 100 friends (or fewer for the last group).
def chunkit(lst, n):
    newn = int(len(lst) / n)
    for i in range(0, n - 1):
        yield lst[i * newn:i * newn + newn]
    yield lst[n * newn - newn:]

def getDescriptions(ids):
    friends = chunkit(ids, 3)
    while True:
        try:
            fids = next(friends)
            sids = ','.join(map(str, fids))
            r = requests.get('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key=' + API_KEY + '&steamids=' + sids)
            data = r.json()
            # Do something with the data variable
        except StopIteration:
            break
This will create a generator that yields the list in 3 groups (the second parameter to chunkit). I chose 3 because the base size of the friends list is 250. You can get more (rules from this post), but it is a safe place to start. You can fine-tune that value as you need.
Utilizing this method, your data value will be overwritten each loop. Make sure you do something with it at the place indicated.
I have an easy alternative: just reduce your list size on each loop until it is exhausted:
def getDescriptions(ids):
    sids = ','.join(map(str, ids))
    sids_queue = sids.split(',')
    data = []
    while len(sids_queue) != 0:
        r = requests.get('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key=' +
                         API_KEY + '&steamids=' + ','.join(sids_queue[:100]))
        data.append(r.json())  # r.json() with the parentheses, so the parsed body is stored
        # then drop the first 100 ids and reassign to sids_queue, you get the idea
        sids_queue = sids_queue[100:]