Python time as value

I am trying to consume HP NNMi web services. As you can see, I am using milliseconds as the value in filter2.value = d_in_ms, and this is not working for me. I am able to see results when I use a value like filter2.value = "1493078400000". Please tell me if we can use int values as below:
#!/usr/bin/python
from suds.client import Client
from suds.transport.http import HttpAuthenticated
import datetime
import time
now = datetime.datetime.now()
currenttime = now - datetime.timedelta(hours=12)
epochtime = time.mktime(currenttime.timetuple())
print epochtime
d_in_ms = int(epochtime)*1000
t = HttpAuthenticated(username='xxxxx', password='xxxx')
url = 'http://example.com/IncidentBeanService/IncidentBean?wsdl'
client = Client(url, transport=t)
filter1 = client.factory.create('ns2:condition')
filter1.name = "sourceNodeName"
filter1.operator = "EQ"
filter1.value = "DEVICE"
filter2 = client.factory.create('ns2:condition')
filter2.name = "lastOccurrenceTime"
filter2.operator = "GT"
filter2.value = d_in_ms
filter = client.factory.create('ns2:expression')
filter.operator = "AND"
filter.subFilters = [filter1, filter2]
allincidents = client.service.getIncidents(filter)
print "Nodes in topology:", len(allincidents.item)
for i in allincidents.item[:]:
    print i

I am able to see result when I use values like filter2.value = "1493078400000"
According to this statement, it looks like filter2.value should be a string. That would suggest that you need to use:
filter2.value = str(d_in_ms)
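As an aside, a hedged sketch: time.mktime interprets the time tuple in the machine's local timezone, so if the service actually expects UTC epoch milliseconds (an assumption; the post doesn't say which the server wants), calendar.timegm over a UTC tuple avoids the skew:

import calendar
import datetime

# Epoch milliseconds for "12 hours ago", computed against UTC
cutoff = datetime.datetime.utcnow() - datetime.timedelta(hours=12)
d_in_ms = calendar.timegm(cutoff.timetuple()) * 1000
filter2.value = str(d_in_ms)  # the service appears to expect a string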

Related

How to get next Page of ADF Pipeline run details using continuation_token in Azure Data Factory - Databricks?

I am using the
adf_client.pipeline_runs.query_by_factory(resourceGroupName, factoryName, filter_parameters)
method of the azure.mgmt.datafactory.DataFactoryManagementClient package to fetch ADF pipeline run details.
The above call returns 100 pipeline run records at once. Along with the response, it returns a continuation_token which, I believe, is supposed to be used to fetch the next set/page of records.
I am not sure which function to use for this. I tried azure.mgmt.datafactory.models.PipelineRun() (trial and error) to see if it satisfies the requirement. Unfortunately, it doesn't. The MS documentation is also too abstract to be of much help.
So, which function in Azure's Python SDK can be used to fetch the next page of run records?
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import *
from datetime import datetime, timedelta
from azure.identity import ClientSecretCredential
subscription_id = "b83c1wd3-xxxx-xxxx-xxxx-2b83a074c23f"
rg_name = "My-rg"
df_name = "ktestadf"
tenant_id = "12f978bf-xxxx-xxxx-xxxx-2d7cd011db47"
client_id = "a71ad3ca-xxxx-xxxx-xxxx-af0c2a3fdae1"
client_secret = "Nym7Q~j5YMyxxxxxx3tAk879y9vLrxAQqaI8n"
credential = ClientSecretCredential(
    tenant_id=tenant_id,
    client_id=client_id,
    client_secret=client_secret)
adf_client = DataFactoryManagementClient(credential=credential,
                                         subscription_id=subscription_id)
pipe_run = []
dsb = datetime.now()
dsa = dsb - timedelta(hours=24)
filter_params = RunFilterParameters(last_updated_after=dsa, last_updated_before=dsb)
pipeline_runs = adf_client.pipeline_runs.query_by_factory(
    resource_group_name=rg_name, factory_name=df_name, filter_parameters=filter_params)
pipe_run.append(pipeline_runs.value)
while pipeline_runs.continuation_token:
    filter_params = RunFilterParameters(
        last_updated_after=dsa, last_updated_before=dsb,
        continuation_token=pipeline_runs.continuation_token)
    pipeline_runs = adf_client.pipeline_runs.query_by_factory(
        resource_group_name=rg_name, factory_name=df_name, filter_parameters=filter_params)
    pipe_run.append(pipeline_runs.value)
You get a continuation_token when there is a next page of results; it is null once no results remain.
Here is an example of its usage; however, I currently don't have enough pipeline runs to show the token itself.
In your case you have received one, so here is how you can use it.
Considering pipeline_runs is holding the results, pipeline_runs.continuation_token is what we need to pass back in another request to get the next page.
Add a simple loop, say a while, checking whether pipeline_runs.continuation_token exists, and request the next page until the returned token is null - the end of the results.
Complete working implementation:
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import *
from datetime import datetime, timedelta
from azure.identity import ClientSecretCredential
subscription_id = "b83c1wd3-xxxx-xxxx-xxxx-2b83a074c23f"
rg_name = "My-rg"
df_name = "ktestadf"
tenant_id = "12f978bf-xxxx-xxxx-xxxx-2d7cd011db47"
client_id = "a71ad3ca-xxxx-xxxx-xxxx-af0c2a3fdae1"
client_secret = "Nym7Q~j5YMyxxxxxx3tAk879y9vLrxAQqaI8n"
credentials = ServicePrincipalCredentials(client_id=client_id, secret=client_secret, tenant=tenant_id)
adf_client = DataFactoryManagementClient(credentials, subscription_id)
filter_params = PipelineRunFilterParameters(last_updated_after=datetime.now() - timedelta(30), last_updated_before=datetime.now() + timedelta(1))
pipeline_runs = adf_client.pipeline_runs.query_by_factory(resource_group_name=rg_name, factory_name=df_name, filter_parameters=filter_params)
for pipeline_run in pipeline_runs.value:
    print(pipeline_run)
while pipeline_runs.continuation_token:
    # The token travels inside the filter parameters, not as a separate keyword
    filter_params = PipelineRunFilterParameters(
        last_updated_after=datetime.now() - timedelta(30),
        last_updated_before=datetime.now() + timedelta(1),
        continuation_token=pipeline_runs.continuation_token)
    pipeline_runs = adf_client.pipeline_runs.query_by_factory(
        resource_group_name=rg_name, factory_name=df_name, filter_parameters=filter_params)
    print(pipeline_runs.value)
You can optionally skip printing pipeline_runs in the earlier for loop; it is only there in the code for reference.
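For completeness, a minimal pagination sketch in the newer-SDK style used in the question (ClientSecretCredential / RunFilterParameters), assuming the token is passed back through the filter object as the question's own loop does; the helper name is mine:

def fetch_all_pipeline_runs(adf_client, rg_name, df_name, after, before):
    # Collect every pipeline run, page by page, via continuation_token
    runs = []
    token = None
    while True:
        params = RunFilterParameters(
            last_updated_after=after,
            last_updated_before=before,
            continuation_token=token)
        page = adf_client.pipeline_runs.query_by_factory(
            resource_group_name=rg_name,
            factory_name=df_name,
            filter_parameters=params)
        runs.extend(page.value)
        token = page.continuation_token
        if not token:  # a null token marks the last page
            return runs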

Change method when http request returns something new (python)

An inefficient version of what I'm trying to do is this:
while True:
    dynamic_variable = http(request)  # the http request's return value is subject to change
    method(dynamic_variable)
Here method(dynamic_variable) isn't guaranteed to finish, and if http(request) returns a different value in the meantime, then method(dynamic_variable) becomes a useless function.
It seems like I should be able to change dynamic_variable more efficiently by having it "automatically" update whenever http(request) changes value.
I think what I want to do is called the "observer pattern" but I'm not quite fluent enough in code to know if that's the correct pattern or how to implement it.
A simple example would be much appreciated!
Edit:
from web3 import Web3
import json
from hexbytes import HexBytes
import numpy as np
import os
import time

INFURA_ROPSTEN_URL = "https://ropsten.infura.io/v3/<api_key>"

# metamask account information
PUBLIC_KEY = "0x3FaD9AccC3A39aDbd9887E82F94602cEA6c7F86f"
PRIVATE_KEY = "myprivatekey"
UNITS_ADDRESS = "units_address"

# from truffle build. For ABI
JSON_PATH = "../truffle/build/contracts/Units.json"

def set_up_web3():
    web3 = Web3(Web3.HTTPProvider(INFURA_ROPSTEN_URL))
    web3.eth.defaultAccount = PUBLIC_KEY
    return web3

def get_units_contract_object():
    with open(JSON_PATH, 'r') as json_file:
        abi = json.load(json_file)['abi']
    return web3.eth.contract(address=UNITS_ADDRESS, abi=abi)

def solve(web3, units):
    nonce = np.random.randint(0, 1e10)
    while True:
        challenge_number_hex = HexBytes(units.functions.getChallengeNumber().call()).hex()
        my_digest_hex = web3.solidityKeccak(
            ['bytes32', 'address', 'uint256'],
            [challenge_number_hex, PUBLIC_KEY, nonce]).hex()
        my_digest_number = int(my_digest_hex, 0)
        target = units.functions.getMiningTarget().call()
        if my_digest_number < target:
            return (nonce, my_digest_hex)
        else:
            nonce += 1

def build_transaction(units, nonce, digest_hex, txn_count):
    return units.functions.mint(
        nonce,
        digest_hex
    ).buildTransaction({
        "nonce": txn_count,
    })

if __name__ == "__main__":
    web3 = set_up_web3()
    txn_count = web3.eth.getTransactionCount(PUBLIC_KEY)
    units = get_units_contract_object()
    _cycle_goal = 20
    _prev_finish = time.time()
    _wait_time = 0
    while True:
        target = units.functions.getMiningTarget().call()
        nonce, digest_hex = solve(web3, units)
        mint_txn = build_transaction(units, nonce, digest_hex, txn_count)
        signed_txn = web3.eth.account.sign_transaction(mint_txn, private_key=PRIVATE_KEY)
        txn_address = web3.eth.sendRawTransaction(signed_txn.rawTransaction)
        txn_count += 1
        print(f"Solution found! nonce={nonce}, digest_hex={digest_hex}")
        _finished = time.time()
        _elapsed = _finished - _prev_finish
        _additional_wait = _cycle_goal - _elapsed
        _wait_time += _additional_wait
        print(f"Waiting {_wait_time}")
        _prev_finish = _finished
        time.sleep(_wait_time)
challenge_number_hex is the variable I want to update.
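A hedged sketch of the polling-observer idea being asked about (all names hypothetical): keep polling the source, but only invoke the dependent method when the fetched value actually changes:

import time

def watch(fetch, on_change, poll_interval=1.0):
    # Call on_change(new_value) only when fetch() returns something new
    last = object()  # sentinel that never equals a real value
    while True:
        current = fetch()
        if current != last:
            on_change(current)
            last = current
        time.sleep(poll_interval)

# Usage sketch: fetch_challenge and handle_new_challenge stand in for the
# HTTP call and the dependent method from the question (hypothetical names).
# watch(fetch_challenge, handle_new_challenge, poll_interval=5.0)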

Python datetime from gps to local timezone not changing

So I've been looking for a working solution, but I just can't find one...
I currently have a little program which fetches data from my serial device. This works 100%, but I am having trouble with some datetime handling....
The date and time come in as UTC, but I want them in another timezone (Europe/Brussels).
Now, I have written some code that works to convert the input strings, but when I try to replace the data in my json with the new data, it doesn't do anything...
Full code:
import serial
import string
import pynmea2
import datetime
import pytz
import math
import time

def utc_to_local(utc_date, utc_time):
    tz_eastern = pytz.timezone('UTC')
    tz_brussels = pytz.timezone('Europe/Brussels')
    return tz_eastern.localize(
        datetime.datetime.strptime(str(utc_date) + str(utc_time), '%Y-%m-%d%H:%M:%S')
    ).astimezone(tz_brussels).strftime("%Y-%m-%d %H:%M:%S")

def knots_to_km(knots):
    return round(math.floor(knots * 1.852))

def open_serial_connection():
    ser = serial.Serial()
    ser.port = "/dev/ttyS0"
    ser.baudrate = 9600
    ser.timeout = 1
    ser.open()
    return ser

def readGPS(serialObject):
    try:
        json = dict()
        latitude = 0.0
        longitude = 0.0
        speed = 0.0
        datetimestamp = ''
        sentence = serialObject.readline().decode('utf-8')
        if sentence.startswith('$GPGGA'):
            data = pynmea2.parse(sentence)
            latitude = data.latitude
            longitude = data.longitude
        if sentence.startswith('$GPVTG'):
            data = pynmea2.parse(sentence)
            speed = data.spd_over_grnd_kmph
        if sentence.startswith('$GPRMC'):
            data = pynmea2.parse(sentence)
            datetimestamp = utc_to_local(data.datestamp, data.timestamp)
        json["lat"] = latitude
        json["lon"] = longitude
        json["speed"] = speed
        json["time"] = datetimestamp
        if json["lat"] != 0.0 and json["lon"] != 0.0:
            return json
    except:
        pass

# Test code on pi
ser = open_serial_connection()
while True:
    data = readGPS(ser)
    if data is not None:
        print(data)
    time.sleep(0.1)
The output here is the following:
{'lat': xx.xxxxxxxxxxxxxx, 'lon': x.xxxx, 'speed': 0.0, 'time': ''}
The X-es are the correct Latitude and Longitude, they also change.
As you can see, the time is empty, but should change according to the code, right?
What am I missing here?
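A quick sanity check first: utc_to_local itself works when fed well-formed GPRMC-style inputs (a hypothetical example; pynmea2 yields datetime.date / datetime.time objects):

import datetime
print(utc_to_local(datetime.date(2020, 6, 1), datetime.time(12, 30, 0)))
# -> '2020-06-01 14:30:00' (Brussels is UTC+2 in summer)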
So the issue here was, as mentioned before, utc_to_local, which tried to parse "wrong" data and thus crashed the whole program...
The working code stays the same as before, except for readGPS(), which now loops over the data using a set():
def readGPS(serialObject):
    try:
        result = {}
        result_set = set()
        request_set = {'GPGGA', 'GPVTG', 'GPRMC'}
        while not request_set.issubset(result_set) and len(result_set) != 10:  # 10 == max sentences
            sentence = serialObject.readline().decode('utf-8')
            key = sentence[1:6]
            result_set.add(key)
            data = pynmea2.parse(sentence)
            if key == 'GPGGA':
                result["lat"] = data.latitude
                result["lon"] = data.longitude
            if key == 'GPVTG':
                result["speed"] = data.spd_over_grnd_kmph
            if key == 'GPRMC':
                result["datetime"] = utc_to_local(data.datestamp, data.timestamp)
        return result
    except:
        pass
It is also good practice not to just pass on errors but to log them or handle them in some way; in my case, though, I don't really need error logging (except when debugging), since I'll get a lot of "wrong" responses.
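A minimal sketch of that logging alternative, using the standard logging module and pynmea2's ParseError:

import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("gps")

try:
    data = pynmea2.parse(sentence)
except pynmea2.ParseError as exc:
    # Log and skip malformed sentences instead of silently swallowing everything
    log.warning("Skipping unparseable NMEA sentence: %s", exc)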

How can I get my Python Code to restart when the network disconnects

I have a piece of Python code running as a service that pulls weather data via an API.
The code itself runs perfectly fine when everything is hunky dory, i.e. the network is up, but I have noticed that sometimes the WiFi on the Pi that is pulling the API data drops, and then the Python code seems to stop.
I have a small line of code providing the most basic of logs, but I would like to improve upon it greatly. The log code just records datetime.now so I can see when the code last ran.
#!/usr/bin/python3
#import modules
import cymysql
from time import sleep
from urllib.request import urlopen
import json
import datetime

#set MySQL variables
host = "localhost"
user = "xxx"
password = "xxx"
schema = "xxx"

#connect to MySQL DB
db = cymysql.connect(host, user, password, schema)
curs = db.cursor()

#set api key for DarkSky API
apikey = "xxx"

# Latitude & longitude
lati = "-26.20227"
longi = "28.04363"

# Add units=si to get it in sensible ISO units.
url = "https://api.forecast.io/forecast/" + apikey + "/" + lati + "," + longi + "?units=si"

#begin infinite loop
while True:
    #convert API reading to json and readable array 'weather'
    meteo = urlopen(url).read()
    meteo = meteo.decode('utf-8')
    weather = json.loads(meteo)
    #set variables for current weather
    cTemp = weather['currently']['temperature']
    cCond = weather['currently']['summary']
    cRain1 = weather['currently']['precipProbability']
    cRain2 = cRain1 * 100
    cIcon = weather['currently']['icon']
    oaSum = weather['daily']['summary']
    #print variables - for testing purposes
    #print (cTemp)
    #print (cCond)
    #print (cRain2)
    #print (cIcon)
    #print (oaSum)
    #extract daily data from 'weather' array
    daily = weather['daily']['data']
    #create new arrays for daily variables
    listHigh = []
    listLow = []
    listCond = []
    listRain = []
    listIcon = []
    #set daily variables
    for i in daily:
        listHigh.append(i['temperatureHigh'])
    for i in range(0, len(listHigh)):
        high1 = listHigh[0]
        high2 = listHigh[1]
        high3 = listHigh[2]
        high4 = listHigh[3]
        high5 = listHigh[4]
        high6 = listHigh[5]
        high7 = listHigh[6]
        high8 = listHigh[7]
    for o in daily:
        listLow.append(o['temperatureLow'])
    for o in range(0, len(listLow)):
        low1 = listLow[0]
        low2 = listLow[1]
        low3 = listLow[2]
        low4 = listLow[3]
        low5 = listLow[4]
        low6 = listLow[5]
        low7 = listLow[6]
        low8 = listLow[7]
    for p in daily:
        listCond.append(p['summary'])
    for p in range(0, len(listCond)):
        cond1 = listCond[0]
        cond2 = listCond[1]
        cond3 = listCond[2]
        cond4 = listCond[3]
        cond5 = listCond[4]
        cond6 = listCond[5]
        cond7 = listCond[6]
        cond8 = listCond[7]
    for m in daily:
        listRain.append(m['precipProbability'])
    for m in range(0, len(listRain)):
        rain1 = listRain[0]
        rain2 = listRain[1]
        rain3 = listRain[2]
        rain4 = listRain[3]
        rain5 = listRain[4]
        rain6 = listRain[5]
        rain7 = listRain[6]
        rain8 = listRain[7]
    #convert rain chance to readable percentage
    prain1 = rain1 * 100
    prain2 = rain2 * 100
    prain3 = rain3 * 100
    prain4 = rain4 * 100
    prain5 = rain5 * 100
    prain6 = rain6 * 100
    prain7 = rain7 * 100
    prain8 = rain8 * 100
    for l in daily:
        listIcon.append(l['icon'])
    for l in range(0, len(listIcon)):
        icon1 = listIcon[0]
        icon2 = listIcon[1]
        icon3 = listIcon[2]
        icon4 = listIcon[3]
        icon5 = listIcon[4]
        icon6 = listIcon[5]
        icon7 = listIcon[6]
        icon8 = listIcon[7]
    #print daily variables - for testing purposes
    #print (high1)
    #print (low1)
    #print (cond1)
    #print (prain1)
    #print (icon1)
    #print (high2)
    #print (low2)
    #print (cond2)
    #print (prain2)
    #print (icon2)
    #update data in DataBase
    try:
        sql_update_query = """UPDATE weather SET current_temp = %s, cur$
        varis = (cTemp, cCond, cRain2, cIcon, high1, low1, cond1, prain$
        curs.execute(sql_update_query, varis)
        db.commit()
    except db.Error as error:
        print("Error: {}".format(error))
        db.rollback()
    #write date to log file
    with open("/home/pi/CoRo/Projects/WeatherMan/weatherlog.txt", mode="w") as file:
        file.write('Last Data was pulled at: %s' % (datetime.datetime.now()))
    #set loop to sleep for 10 minutes and go again
    sleep(600)
I understand that the database code is snipped, but it is just the variables being put into the database, and I can see that it works.
However, if the network disconnects, the code stops and the database is left with the last polled API data.
How would I restart the Python code if the API call fails?
Thanks in advance,
You could rewrite the portion of your code that pulls the weather data as a function or separate module. This would allow you to call it only when the network connection is working. Some pseudo code below:
if network_connection:
    pull_weather_data()
else:
    do_something()
do_something() could be an effort to reconnect to the network, such as resetting your network adapter.
You could determine the state of the network connection by trying to ping your router or an external IP, like one of Google's DNS servers (8.8.8.8 or 8.8.4.4).
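A minimal sketch of such a check, assuming a Unix-like ping that accepts a -c count flag:

import subprocess

def network_connection(host="8.8.8.8"):
    # Return True if a single quiet ping to the host succeeds
    result = subprocess.run(
        ["ping", "-c", "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL)
    return result.returncode == 0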
To avoid nested loops you could use the continue statement. For example:
while True:
    if network_connection:
        pull_weather_data()
    else:
        reset_network_connection()
        time.sleep(180)  # Sleep for 3 minutes.
        continue
The continue will send the interpreter back to the start of the while loop. From there it will check the network connection and either pull data or reset the network connection and sleep for another 3 minutes.
Using Quernon's answer above, the code has been edited as follows:
#!/usr/bin/python3
#import modules
import os
import cymysql
from time import sleep
from urllib.request import urlopen
import json
import datetime

#set MySQL variables
host = "localhost"
user = "xxx"
password = "xxx"
schema = "xxx"

#connect to MySQL DB
db = cymysql.connect(host, user, password, schema)
curs = db.cursor()

#set api key for DarkSky API
apikey = "xxx"

# Latitude & longitude
lati = "-26.20227"
longi = "28.04363"

# Add units=si to get it in sensible ISO units.
url = "https://api.forecast.io/forecast/" + apikey + "/" + lati + "," + longi + "?units=si"

#begin infinite loop
while True:
    #function to check if there is an internet connection
    def check_ping():
        hostname = "8.8.8.8"
        response = os.system("ping -c 1 " + hostname)
        #and then check the response...
        if response == 0:
            pingstatus = 0
        else:
            pingstatus = 1
        return pingstatus

    networkstatus = check_ping()
    #print check_ping() - for testing purposes
    #print (networkstatus)

    #function to pull weather data from API
    def get_weather():
        #insert weather data code here with no changes
        pass

    if networkstatus == 0:
        get_weather()
    else:
        print("Resetting Network Adapters")
        dwnnw = 'ifconfig wlan0 down'
        upnw = 'ifconfig wlan0 up'
        os.system(dwnnw)
        os.system(upnw)
        sleep(180)
        continue
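One more hedged option beyond the answer above: wrapping the API call itself in try/except means a dropped connection no longer kills the loop (the function name and retry policy here are illustrative):

from urllib.error import URLError
from urllib.request import urlopen
from time import sleep

def fetch_weather(url, retries=3, delay=60):
    # Return the decoded API response, or None after repeated network errors
    for attempt in range(retries):
        try:
            return urlopen(url).read().decode('utf-8')
        except URLError as error:
            print("API call failed ({}), retry {} of {}".format(error, attempt + 1, retries))
            sleep(delay)
    return None  # caller can skip this cycle and try again later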

Parsing Json with multiple "levels" with Python

I'm trying to parse a JSON response from an API call.
I have found this code that fits my need and am trying to adapt it to what I want:
import math, urllib2, json, re
def download():
    graph = {}
    page = urllib2.urlopen("http://fx.priceonomics.com/v1/rates/?q=1")
    jsrates = json.loads(page.read())
    pattern = re.compile("([A-Z]{3})_([A-Z]{3})")
    for key in jsrates:
        matches = pattern.match(key)
        conversion_rate = -math.log(float(jsrates[key]))
        from_rate = matches.group(1).encode('ascii', 'ignore')
        to_rate = matches.group(2).encode('ascii', 'ignore')
        if from_rate != to_rate:
            if from_rate not in graph:
                graph[from_rate] = {}
            graph[from_rate][to_rate] = float(conversion_rate)
    return graph
And I've turned it into:
import math, urllib2, json, re
def download():
    graph = {}
    page = urllib2.urlopen("https://bittrex.com/api/v1.1/public/getmarketsummaries")
    jsrates = json.loads(page.read())
    for pattern in jsrates['result'][0]['MarketName']:
        for key in jsrates['result'][0]['Ask']:
            matches = pattern.match(key)
            conversion_rate = -math.log(float(jsrates[key]))
            from_rate = matches.group(1).encode('ascii', 'ignore')
            to_rate = matches.group(2).encode('ascii', 'ignore')
            if from_rate != to_rate:
                if from_rate not in graph:
                    graph[from_rate] = {}
                graph[from_rate][to_rate] = float(conversion_rate)
    return graph
Now the problem is that there are multiple levels in the json ("result" > 0, 1, 2, etc.):
[json screenshot]
for key in jsrates['result'][0]['Ask']:
I want the zero to be able to be any number; I don't know if that's clear.
That way I could get all the ask prices matched to their market names.
I have shortened the code so this doesn't make too long of a post.
Thanks
PS: sorry for the English, it's not my native language.
You could loop through all of the result entries that are returned, ignoring the numeric index (the "result" field is a JSON array, so it arrives as a Python list you can iterate over directly):
for result in jsrates['result']:
    ask = result.get('Ask')
    if ask is not None:
        pass  # Do things with your ask...
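A hedged sketch tying that loop back to the question's goal (assuming Bittrex MarketName values look like "BTC-LTC"; the helper name is mine):

import math

def build_graph(jsrates):
    # Map base currency -> quote currency -> -log(ask), skipping empty asks
    graph = {}
    for result in jsrates['result']:
        market = result.get('MarketName', '')  # e.g. "BTC-LTC"
        ask = result.get('Ask')
        if ask and '-' in market:
            from_rate, to_rate = market.split('-', 1)
            graph.setdefault(from_rate, {})[to_rate] = -math.log(float(ask))
    return graph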
