I want to use a while loop to refresh the output of a method.
def usagePerUserApi():
    while True:
        url = ....
        resp = requests.get(url, headers=headers, verify=False)
        data = json.loads(resp.content)
        code = resp.status_code
        Verbindungscheck.ausgabeVerbindungsCode(code)
        head = .....
        table = []
        for item in data['data']:
            if item['un'] == tecNo:
                table.append([
                    item['fud'],
                    item['un'],
                    str(item['lsn']),
                    str(item['fns']),
                    str(item['musage']) + "%",
                    str(item['hu']),
                    str(item['mu']),
                    str(item['hb']),
                    str(item['mb'])
                ])
        print(tabulate(table, headers=head, tablefmt="github"))
        time.sleep(300)
If I leave time.sleep like this, it is shown as an error. If I put it under the while loop, the output is updated constantly and does not wait 5 minutes.
I don't know where the mistake is; I hope you can help me.
You need to import the Python time library. If you place
import time
at the top of your file, it should work.
Have you imported the time library? If not, then add
import time
to the top of your code, and it should work.
Also bear in mind that there may be problems with output buffering, where the program won't wait as expected, and so you'll need to turn it off, as shown by this answer.
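Putting that together, here is a stripped-down sketch of the shape the loop should take, with the API call and table printing replaced by a placeholder so it runs on its own; the real body stays exactly as in the question:

import time

def usagePerUserApi():
    while True:
        # ... fetch the data and print the table here, as in the question ...
        print("refreshed")
        # The sleep sits inside the loop body, so every pass waits
        # 5 minutes before the next refresh
        time.sleep(300)

usagePerUserApi()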
I want to make a script with a few functions. The first is add_cart, which, as the name says, attempts to add an item to the cart using the proper cookies/headers. I do get a response, and when it contains ["error"] I print a log line saying "retrying cart", but the script suddenly stops even if I put the add_cart() call at the bottom. I also want to use the datetime module to time.sleep(2) before running the add_cart() function again, so I'm confused about how to get this all up and running. I attached an image below with my code; it currently gets a response, because it's printing in the terminal, but I want to achieve what I described above. Thanks!
Image (All code with headers, cookies, and payload minimized)
https://i.imgur.com/2jGwAeA.png
Please let me know if something is wrong or if there is any way I can fix my formatting. All the headers, cookies, payload, request URL and responses are right; I'm just trying to fix my other errors.
This is the full code, since I was asked to add it:
import json
import requests
from datetime import datetime
import time
import os

cookies = {
}
headers = {
}
atcPayload = {
}

def add_cart(cookies, headers, atcPayload):
    response = requests.post('apiurl', headers=headers, cookies=cookies, data=atcPayload)
    data = response.json()
    print("adding")
    if data["error"] == 'true':
        print("retrying cart")

# (cookies/stuff hidden because it's a private project, but that isn't the issue anyway)
There seems to be another issue where it won't run in the Visual Studio Code terminal now either :(
If I understand your problem correctly, you would like to implement a loop where you call your function, wait 2 seconds, and repeat indefinitely.
Edit:
I hope I got it right now :)
I modified the code based on your comments; now it waits until your function returns "ok".
import time
import sys
from datetime import datetime, timedelta

Start_Time = datetime.now()

# Your function
def add_cart():
    # This is to simulate some timing in the response
    global Start_Time
    Return_Value = "retry cart"
    Current_Time = datetime.now()
    # Just print out the current time
    print(Current_Time)
    # Give a 10 second delay on the status change
    if Current_Time > (Start_Time + timedelta(seconds=10)):
        Return_Value = "ok"
    return Return_Value

# The main function
def main():
    # For the response from the function
    Response = None
    # This will make an infinite loop
    while True:
        # Call your function; it will return "retry cart" until 10 seconds have passed, then it returns "ok"
        Response = add_cart()
        print(Response)
        # Wait 2 seconds
        time.sleep(2)
        # Check the response and break out from the while loop
        if "ok" == Response:
            break

# This will run if you run your file, and not run if you import it (for later use)
if __name__ == "__main__":
    try:
        # Run the main function defined above
        main()
    # If you want to interrupt the script, press CTRL+C and the part below will catch it
    except KeyboardInterrupt:
        print("Interrupted")
        sys.exit(0)
I'm a fairly new programmer and I have a question.
I have code that checks discount percentages on https://shadowpay.com/en?price_from=0.00&price_to=34.00&game=csgo&hot_deal=true
and I want it to work in real time.
Questions:
Is there a way to make it check in real time, or is it only possible by refreshing the page?
If refreshing the page:
How can I make it refresh the page? I saw older answers, but they did not work for me because they only applied to the asker's own code.
(I tried to send the GET request every time the while loop runs, but it doesn't work. Or should it?)
This is the code:
import json
import requests
import time
import plyer
import random
import copy

min_notidication_perc = 26
un = 0
us = ""
biggest_number = 0

r = requests.get('https://api.shadowpay.com/api/market/get_items?types=[]&exteriors=[]&rarities=[]&collections=[]&item_subcategories=[]&float={"from":0,"to":1}&price_from=0.00&price_to=34.00&game=csgo&hot_deal=true&stickers=[]&count_stickers=[]&short_name=&search=&stack=false&sort=desc&sort_column=price_rate&limit=50&offset=0', timeout=3)

while True:
    # Here is the place where I'm thinking of putting it
    time.sleep(5)
    skin_list = []
    perc_list = []
    for i in range(len(r.json()["items"])):
        perc_list.append(r.json()["items"][i]["discount"])
        skin_list.append(r.json()["items"][i]["collection"]["name"])
    skin = skin_list[perc_list.index(max(perc_list))]
    print(skin)
    biggest_number = int(max(perc_list))
    if un != biggest_number or us != skin:
        if int(max(perc_list)) >= min_notidication_perc:
            plyer.notification.notify(
                title=f'-{int(max(perc_list))}% ShadowPay',
                message=f'{skin}',
                app_icon="C:\\Users\\<user__name>\\Downloads\\Inipagi-Job-Seeker-Target.ico",
                timeout=120,
            )
        else:
            pass
    else:
        pass
    us = skin
    un = biggest_number
    print(f'id: {random.randint(1, 99999999)}')
    print(f'-{int(max(perc_list))}% discount\n')
When you use requests.get() you retrieve the page source of that link once and the connection is then closed. Since requests blocks until the response arrives, you don't need a time.sleep(5) just to wait for the response.
To get the value in real time you'll have to call the page again on every iteration; that is where you can use time.sleep(), so as not to abuse the API.
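A minimal sketch of that idea, using the question's polling loop but with the requests.get() call moved inside the loop so each pass fetches fresh data:

import time
import requests

API_URL = 'https://api.shadowpay.com/api/market/get_items?types=[]&exteriors=[]&rarities=[]&collections=[]&item_subcategories=[]&float={"from":0,"to":1}&price_from=0.00&price_to=34.00&game=csgo&hot_deal=true&stickers=[]&count_stickers=[]&short_name=&search=&stack=false&sort=desc&sort_column=price_rate&limit=50&offset=0'

while True:
    # Re-request the page on every pass so the data is current
    r = requests.get(API_URL, timeout=3)
    items = r.json()["items"]
    perc_list = [item["discount"] for item in items]
    skin_list = [item["collection"]["name"] for item in items]
    # ... the rest of the processing and notification logic from the question ...
    # Pause between polls so the API is not hammered
    time.sleep(5)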
I have problems achieving the following in Python. I am making API requests in a for loop, and would like to implement a status code check and a pause based on the status code, to make sure that the code will run without errors. So for example, I am after something like:
for i in X:
    url = 'abc' + i
    r = requests.get(url)
    while r.status_code == 123:
        sleep(1)
        r = requests.get(url)
    <the code I want to run that uses r>
How can I achieve this? Thanks in advance :)
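For what it's worth, a small self-contained sketch of that pattern; the status code 429 and the example URLs are illustrative assumptions, not values from the question:

import time
import requests

def get_with_retry(url, wait_on_status=429, pause=1):
    # Keep re-requesting while the API returns the status we want to wait out
    r = requests.get(url)
    while r.status_code == wait_on_status:
        time.sleep(pause)
        r = requests.get(url)
    return r

for i in ["1", "2", "3"]:
    url = "https://httpbin.org/anything/" + i  # placeholder base URL
    r = get_with_retry(url)
    # ... the code that uses r goes here ...
    print(i, r.status_code)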
I've seen a few instances of this question, but I was not sure how to apply the changes to my particular situation. I have code that monitors a webpage for changes and refreshes every 30 seconds, as follows:
import sys
import ctypes
from time import sleep
from Checker import Checker

USERNAME = sys.argv[1]
PASSWORD = sys.argv[2]

def main():
    crawler = Checker()
    crawler.login(USERNAME, PASSWORD)
    crawler.click_data()
    crawler.view_page()
    while crawler.check_page():
        crawler.wait_for_table()
        crawler.refresh()
    ctypes.windll.user32.MessageBoxW(0, "A change has been made!", "Attention", 1)

if __name__ == "__main__":
    main()
The problem is that Selenium will always show an error stating it is unable to locate the element after the first refresh has been made. The element in question, I suspect, is a table from which I retrieve data using the following function:
def get_data_cells(self):
    contents = []
    table_id = "table.datadisplaytable:nth-child(4)"
    table = self.driver.find_element(By.CSS_SELECTOR, table_id)
    cells = table.find_elements_by_tag_name('td')
    for cell in cells:
        contents.append(cell.text)
    return contents
I can't tell if the issue is in the above function or in the main(). What's an easy way to get Selenium to refresh the page without returning such an error?
Update:
I've added a wait function and adjusted the main() function accordingly:
def wait_for_table(self):
    table_selector = "table.datadisplaytable:nth-child(4)"
    delay = 60
    try:
        wait = ui.WebDriverWait(self.driver, delay)
        wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, table_selector)))
    except TimeoutError:
        print("Operation timeout! The requested element never loaded.")
Since the same error is still occurring, either my timing function is not working properly or it is not a timing issue.
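One thing worth checking in the snippet above: Selenium's WebDriverWait.until() raises selenium.common.exceptions.TimeoutException rather than the built-in TimeoutError, so the except clause as written would never catch a wait timeout. A sketch of the same function with that adjusted (imports shown for completeness):

from selenium.webdriver.support import ui
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException

def wait_for_table(self):
    table_selector = "table.datadisplaytable:nth-child(4)"
    delay = 60
    try:
        wait = ui.WebDriverWait(self.driver, delay)
        wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, table_selector)))
    except TimeoutException:
        # Selenium's own timeout exception, not the built-in TimeoutError
        print("Operation timeout! The requested element never loaded.")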
I've run into the same issue while doing web scraping before and found that re-sending the GET request (instead of refreshing) seemed to eliminate it.
It's not very elegant, but it worked for me.
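A rough sketch of that workaround as a drop-in for the refresh() method; the method name is kept from the question, and using the driver's current_url is my assumption:

def refresh(self):
    # Navigate to the current URL again, which re-issues the GET request
    # for the page instead of calling driver.refresh()
    self.driver.get(self.driver.current_url)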
I appear to have fixed my own problem.
My refresh() function was written as follows:
def refresh(self):
    self.driver.refresh()
All I did was switch frames right after the refresh() call. That is:
def refresh(self):
    self.driver.refresh()
    self.driver.switch_to.frame("content")
This took care of it. I can see that the page is now refreshing without issues.
I am trying to create a script that allows me to send a GET request to every link in a text file at once. I am sure I could do this with threading but maybe you guys have a better suggestion. So far all it does is read each line one by one and send the request one by one.
import urllib2

def send(first, last):
    with open("urls.txt", 'r') as urls:
        for url in urls:
            url = url.rstrip("\n")
            print url
            urllib2.urlopen(url + "?f_name=" + first + "&last_name=" + last)

if __name__ == "__main__":
    first = raw_input("First Name: ")
    last = raw_input("Last Name: ")
    send(first, last)
Check out requests's async support. It has its own package now, grequests, and you could use that. It runs on gevent and greenlets. https://github.com/kennethreitz/grequests
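For reference, a small sketch of how grequests is typically used for this; the query-string field names are taken from the question, and sending every URL in one batch is my assumption:

import grequests

def send(first, last):
    with open("urls.txt", 'r') as f:
        urls = [line.rstrip("\n") + "?f_name=" + first + "&last_name=" + last for line in f]
    # Build the unsent requests, then fire them all concurrently on gevent greenlets
    reqs = (grequests.get(u) for u in urls)
    for resp in grequests.map(reqs):
        print(resp)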
Never mind, threading is the best way to go, I figured it out.
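In that spirit, here is a sketch of the threading version, kept in the same urllib2/Python 2 style as the question; spawning one thread per URL is my assumption about what "at once" should mean here:

import threading
import urllib2

def send_one(url, first, last):
    # Each thread issues its own GET request
    urllib2.urlopen(url + "?f_name=" + first + "&last_name=" + last)

def send(first, last):
    threads = []
    with open("urls.txt", 'r') as urls:
        for url in urls:
            url = url.rstrip("\n")
            t = threading.Thread(target=send_one, args=(url, first, last))
            t.start()
            threads.append(t)
    # Wait for all the requests to finish
    for t in threads:
        t.join()

if __name__ == "__main__":
    first = raw_input("First Name: ")
    last = raw_input("Last Name: ")
    send(first, last)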