I've got this situation:
datelist = pd.date_range(dateFrom, dateTo, dateperiods)
while i < len(datelist):
    Date = datelist[i].floor('D')
    print(f'{Date} STARTED')
    if i != 0:
        Date2 = datelist[i-1].floor('D')
    else:
        Date2 = Date
    i = i + 1
    try:
        **COMPLEX AND LONG CODE THAT USE DATE AS A PARM**
    except Exception as inst:
        print(inst)
        print(f'--------elaboration for {Date} failed. started elaboration '
              f'for next date--------')
        if i < len(datelist):
            Date = datelist[i].floor('D')
            print(f'--{Date} STARTED')
            globalsdatelist.append(Date)
            Date2 = datelist[i-1].floor('D')
            i = i + 1
        else:
            print('request failed')
            break
        **COMPLEX AND LONG CODE THAT USE DATE AS A PARM**
What I want is to keep handling failures while i < len(datelist), not just the first time an error occurs.
Is there an easy way to do it?
Thank you so much.
Try this:
datelist = pd.date_range(dateFrom, dateTo, dateperiods)
while i < len(datelist):
    try:
        **COMPLEX AND LONG CODE THAT USE DATE AS A PARM**
    except Exception as e:
        print(e)
        # handle the exception case appropriately, then let the loop continue
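To make the idea concrete, here is a minimal runnable sketch of that structure: the try/except sits inside the loop so every date gets attempted, and failures are just recorded. The `process()` function is a placeholder for the long block, and plain `datetime` dates stand in for the pandas date range (both are assumptions for illustration):

```python
from datetime import date, timedelta

# Stand-in for pd.date_range(dateFrom, dateTo, dateperiods)
datelist = [date(2023, 1, 1) + timedelta(days=n) for n in range(4)]
failed_dates = []

def process(Date, Date2):
    # Placeholder for the **COMPLEX AND LONG CODE** block.
    if Date.day == 2:  # simulate a failure on one date
        raise ValueError(f"simulated failure on {Date}")

i = 0
while i < len(datelist):
    Date = datelist[i]
    Date2 = datelist[i - 1] if i != 0 else Date
    i = i + 1
    try:
        process(Date, Date2)
        print(f'{Date} FINISHED')
    except Exception as inst:
        print(inst)
        failed_dates.append(Date)  # remember the failed date and move on
        print(f'-------- elaboration for {Date} failed, continuing with next date --------')
```

Because the increment and the date lookups happen once at the top of the loop, the except branch no longer needs to duplicate them.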
I'm trying the following:
try:
    scroll_token = response["scroll_token"]
except ValueError as e:
    logging.info(f'could not find "scroll_token" {e}')
    try:
        scroll_token = response["scroll_id"]
    except ValueError as e:
        logging.info(f'could not find "scroll_id" {e}')
Pretty much, if the response doesn't have "scroll_token" then I want it to check for "scroll_id" instead. But for some reason this isn't working; it just keeps failing at the first try case and says:
    scroll_token = response["scroll_token"]
KeyError: 'scroll_token'
You are catching the wrong exception; the exception you should expect here is KeyError.
try:
    scroll_token = response["scroll_token"]
except KeyError as e:
    logging.info(f'could not find "scroll_token" {e}')
    try:
        scroll_token = response["scroll_id"]
    except KeyError as e:
        logging.info(f'could not find "scroll_id" {e}')
You can write this more simply with a loop.
for k in ["scroll_token", "scroll_id"]:
    try:
        scroll_token = response[k]
        break
    except KeyError:
        logging.info("could not find %s", k)
else:
    # What to do if neither key is found?
    ...
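If you only need the first present key and a fallback default, the same lookup can also be written without exceptions, using `next()` over a generator. A minimal sketch with sample data (the real `response` comes from the API):

```python
# Sample response containing only the fallback key.
response = {"scroll_id": "abc123"}

# Take the value of the first key that exists; None if neither is present.
scroll_token = next(
    (response[k] for k in ("scroll_token", "scroll_id") if k in response),
    None,
)
print(scroll_token)
```

The for/else version above is preferable when you want to log each miss; this one is preferable when you just want the value or a default.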
The error message is not printed.
What's the problem? ㅠㅠㅠㅠ
uid = id
upw = pw
try:
    driver.find_element_by_xpath('//*[@id="userID"]').send_keys(uid)
    action.reset_actions()
    driver.find_element_by_xpath('//*[@id="userPWD"]').send_keys(upw)
    driver.find_element_by_xpath('//*[@id="btnLogin"]').click()
except Exception as e:
    print("Login failed for account {}.".format(uid))
In order to print the exception, you need to actually print e, not uid:
.....
except Exception as e:
    print("Login failed for account {}.".format(uid))
    print(e)
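Alternatively, both pieces of information can go into one message; a tiny self-contained demonstration, where the RuntimeError stands in for the real Selenium failure:

```python
uid = "user01"  # sample account id
try:
    # Stand-in for the driver.find_element_by_xpath(...) calls above.
    raise RuntimeError("element not found")
except Exception as e:
    # Combine the account id and the exception text in one line.
    msg = f"{uid} account login failed: {e}"
    print(msg)
```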
My for loop only iterates over one object from the list returned by the function. Below are the code and terminal logs.
Note: I want to delete both of the URLs in the list below.
The output of delete_index_url() looks like:
['https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.13', 'https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.16']
def clean_index():
    delete_urls = delete_index_url()  # above function's output assigned to a variable
    for i in delete_urls:
        print(i)  # <-- this only prints "https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.13"
        try:
            req = requests.delete(i)
        except requests.exceptions.ConnectionError as e:
            print('ERROR: Not able to connect to URL')
            return 0
        except requests.exceptions.Timeout as e:
            print('ERROR: ElasticSearch time out')
            return 0
        except requests.exceptions.HTTPError as e:
            print('ERROR: HTTP error')
            return 0
        else:
            print('INFO: ElasticSearch response status code was %s' % req.status_code)
            if req.status_code != 200:
                return 0
            else:
                return 1
print(clean_index())
Log output from the Python script:
INFO: Sorting indexes
['https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.13', 'https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.16']
INFO: Getting a list of indexes
INFO: ElasticSearch response status code was 200
INFO: Found 200 indexes
INFO: Sorting indexes
https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.13 # only 2019.09.13, not 2019.09.16 logs URLs
Instead of returning 0 or 1 and ending the function right away, you can create a list, store the results in it, and return it:
def clean_index():
    responses = []
    delete_urls = delete_index_url()  # above function's output assigned to a variable
    for i in delete_urls:
        print(i)
        try:
            req = requests.delete(i)
        except requests.exceptions.ConnectionError as e:
            print('ERROR: Not able to connect to URL')
            responses.append(0)
        except requests.exceptions.Timeout as e:
            print('ERROR: ElasticSearch time out')
            responses.append(0)
        except requests.exceptions.HTTPError as e:
            print('ERROR: HTTP error')
            responses.append(0)
        else:
            print('INFO: ElasticSearch response status code was %s' % req.status_code)
            if req.status_code != 200:
                responses.append(0)
            else:
                responses.append(1)
    return responses
print(clean_index())
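Since ConnectionError, Timeout, and HTTPError all derive from requests.exceptions.RequestException, the three except clauses can also be collapsed into one. A minimal sketch of a per-URL helper (the function name and timeout are assumptions, not part of the original code):

```python
import requests

def delete_index(url):
    """Return 1 on HTTP 200, 0 on any request failure (illustrative sketch)."""
    try:
        req = requests.delete(url, timeout=10)
    except requests.exceptions.RequestException as e:
        # Base class of ConnectionError, Timeout, and HTTPError.
        print(f'ERROR: request failed: {e}')
        return 0
    print('INFO: ElasticSearch response status code was %s' % req.status_code)
    return 1 if req.status_code == 200 else 0
```

The loop in clean_index() then becomes `responses = [delete_index(i) for i in delete_urls]`.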
I am trying to create a bridge between MetaTrader 4 and Darwinex ZeroMQ (Python). I got the connection 100% working and returning values. The problem is that the values come back as 'NoneType'; it looks like a dictionary but it is not iterable. Does anybody know how I can assign the information to a variable?
I am new to Python and I am trying to create a small robot.
Follow this link for the Darwinex docs: https://github.com/darwinex/dwx-zeromq-connector
See below my Python code and the returned values:
from DWX_ZeroMQ_Connector_v2_0_1_RC8 import DWX_ZeroMQ_Connector
_zmq = DWX_ZeroMQ_Connector(_verbose=True)
_zmq._generate_default_order_dict()
_zmq._DWX_MTX_GET_ALL_OPEN_TRADES_()
_zmq._DWX_MTX_GET_ALL_OPEN_TRADES_().get('_trades')
Below is a screenshot from a Jupyter notebook, which makes the results easier to see:
I use my own sockets for communication. Here are some edited code examples:
import zmq
import threading
import time
import ast

def fetch_to_db(data):
    try:
        if 'OPEN_TRADES' in data:
            res = ast.literal_eval(data)
            orders = res['_trades']
            for key, value in orders.items():
                # print(key)
                value['orderID'] = key
                print(value)
                print()
        else:
            print('FETCH ERROR')
    except Exception as e:
        print(e)

def get_open_trades(stop):
    try:
        while True:
            data = str('TRADE;GET_OPEN_TRADES')
            c = zmq.Context()
            s = c.socket(zmq.PUSH)
            s.connect('tcp://127.0.0.1:32768')
            s.send_string(data)
            s.send_string(data)
            time.sleep(1)
            if stop():
                s.close()
                break
    except Exception as e:
        print(e)

def receiver_sock(stop):
    try:
        c = zmq.Context()
        s = c.socket(zmq.PULL)
        s.setsockopt(zmq.RCVHWM, 1)
        s.connect('tcp://127.0.0.1:32769')
        while True:
            data = s.recv_string()
            fetch_to_db(data)
            time.sleep(0.00001)
            if stop():
                s.close()
                break
    except Exception as e:
        print(e)

def loop_s():
    try:
        stop_threads = False
        receiver_socket = threading.Thread(target=receiver_sock, args=(lambda: stop_threads, ))
        receiver_socket.setDaemon(True)
        receiver_socket.start()
        open_trades = threading.Thread(target=get_open_trades, args=(lambda: stop_threads, ))
        open_trades.setDaemon(True)
        open_trades.start()
    except Exception as e:
        print(e)

try:
    loop_s()
    while True:
        time.sleep(100)
except KeyboardInterrupt:
    print('CLOSING')
    processes = [get_open_trades, receiver_sock]
    for i in processes:
        stop_threads = True
        t1 = threading.Thread(target=i, args=(lambda: stop_threads, ))
The fetch_to_db function yields a dict for every order, including the orderID.
Regards
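The parsing step in fetch_to_db can be tried in isolation; here is a minimal sketch with a hand-written sample payload (the field names `_trades`, `_symbol`, and `_lots` are assumptions modeled on the connector's output, not taken from its docs):

```python
import ast

# Sample string in the shape the EA sends back over the socket.
data = "{'_action': 'OPEN_TRADES', '_trades': {123456: {'_symbol': 'EURUSD', '_lots': 0.1}}}"

if 'OPEN_TRADES' in data:
    res = ast.literal_eval(data)  # safely parse the string into a real dict
    orders = res['_trades']
    for key, value in orders.items():
        value['orderID'] = key    # attach the ticket number to each order dict
        print(value)
```

`ast.literal_eval` only accepts Python literals, so unlike `eval` it cannot execute arbitrary code from the socket payload.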
I'm using python 3.3.0 in Windows 7.
I have two files: dork.txt and fuzz.py
dork.txt contains following:
/about.php?id=1
/en/company/news/full.php?Id=232
/music.php?title=11
fuzz.py contains following:
import urllib.request
import urllib.error

srcurl = "ANY-WEBSITE"
drkfuz = open("dorks.txt", "r").readlines()
print("\n[+] Number of dork names to be fuzzed:", len(drkfuz))
for dorks in drkfuz:
    dorks = dorks.rstrip("\n")
    srcurl = "http://" + srcurl + dorks
    requrl = urllib.request.Request(srcurl)
    # httpreq = urllib.request.urlopen(requrl)
    # Starting the request
    try:
        httpreq = urllib.request.urlopen(requrl)
    except urllib.error.HTTPError as e:
        print("[!] Error code: ", e.code)
        print("")
        # sys.exit(1)
    except urllib.error.URLError as e:
        print("[!] Reason: ", e.reason)
        print("")
        # sys.exit(1)
    # if e.code != 404:
    if httpreq.getcode() == 200:
        print("\n*****srcurl********\n", srcurl)
        return srcurl
So, when I enter the correct website name, one that has /about.php?id=1, it works fine. But when I provide a website that has /en/company/news/full.php?Id=232, it first prints Error code: 404 and then gives me one of the following errors: UnboundLocalError: local variable 'e' referenced before assignment or UnboundLocalError: local variable 'httpreq' referenced before assignment.
I can understand that if the website doesn't have the page containing /about.php?id=1, it gives Error code: 404, but why isn't it going back into the for loop to check the remaining dorks in the text file? Why does it stop here and throw an error?
I want to make a script to find out valid page from just a website address like: www.xyz.com
When the urllib.request.urlopen(requrl) expression raises an exception, the variable httpreq is never set. You can set it to None before the try statement, then test whether it is still None afterwards:
httpreq = None
try:
    httpreq = urllib.request.urlopen(requrl)
    # ...
if httpreq is not None and httpreq.getcode() == 200:
srcurl = "ANY-WEBSITE"
drkfuz = open("dorks.txt", "r").readlines()
print("\n[+] Number of dork names to be fuzzed:", len(drkfuz))
for dorks in drkfuz:
    dorks = dorks.rstrip("\n")
    srcurl = "http://" + srcurl + dorks
    try:
        requrl = urllib.request.Request(srcurl)
        if requrl is not None:
            try:
                httpreq = urllib.request.urlopen(requrl)
                if httpreq.getcode() == 200:
                    print("\n*****srcurl********\n", srcurl)
                    return srcurl
            except:
                # Handle exception
                pass
    except:
        # Handle your exception
        print("Exception")
Untested code, but it will work logically.
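The None-sentinel pattern described above can also be packaged as a small helper; a minimal self-contained sketch (`check_url` is a hypothetical name, not from the original code):

```python
import urllib.request
import urllib.error

def check_url(srcurl):
    """Return srcurl if it answers with HTTP 200, else None (illustrative sketch)."""
    httpreq = None  # sentinel: stays None if the request fails
    try:
        httpreq = urllib.request.urlopen(srcurl, timeout=10)
    except urllib.error.HTTPError as e:
        print("[!] Error code:", e.code)
    except urllib.error.URLError as e:
        print("[!] Reason:", e.reason)
    if httpreq is not None and httpreq.getcode() == 200:
        return srcurl
    return None
```

Because every failure path leaves `httpreq` as None, the final test can never raise UnboundLocalError, and the caller's loop simply moves on to the next dork.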