I'm very new to Python (most of my previous programming experience is in intermediate C++ and Java) and am trying to develop a script which will read sensor data and log it to a .csv file. To do this I created two separate classes: one reads the sensor data and prints it to the console, while the other is supposed to take that data and log it. I then combined them into a master script containing both classes. Separately they work perfectly, but together only the sensorReader class functions. I am trying to get each class to run in its own thread, while also passing the sensor data from the first class (sensorReader) to the second (csvWriter). I've posted some of my pseudocode below, but I'd be happy to clarify with the actual source code if needed.
import time
import sensorStuff
import csv
import threading
import datetime
class sensorReader:
    # Initializers for the sensors.
    this.code(initializes the sensors)
    while True:
        try:
            this.code(prints the sensor data to the console)
        this.code(throws exceptions)
        this.code(waits 60 seconds)

class csvWriter:
    this.code(fetches the date and time)
    this.code(writes the headers for the excel sheet once)
    while True:
        this.code(gets date and time)
        this.code(writes the time and one row of data to excel)
        this.code(writes a message to console then repeats every minute)

r = sensorReader()
t = threading.Thread(target = r, name = "Thread #1")
t.start()
t.join
w = csvWriter()
t = threading.Thread(target = w, name = "Thread #2")
t.start()
I realize the last part doesn't really make sense, but I'm really punching above my weight here, so I'm not even sure why only the first class works and not the second, let alone how to implement threading for multiple classes. I would really appreciate it if anyone could point me in the right direction.
Thank you!
EDIT
I've decided to put up the full source code:
import time
import board
import busio
import adafruit_dps310
import adafruit_dht
import csv
import threading
import datetime
# import random
class sensorReader:
    # Initializers for the sensors.
    i2c = busio.I2C(board.SCL, board.SDA)
    dps310 = adafruit_dps310.DPS310(i2c)
    dhtDevice = adafruit_dht.DHT22(board.D4)
    while True:
        # Print the values to the console.
        try:
            global pres
            pres = dps310.pressure
            print("Pressure = %.2f hPa" % pres)
            global temperature_c
            temperature_c = dhtDevice.temperature
            global temperature_f
            temperature_f = temperature_c * (9 / 5) + 32
            global humidity
            humidity = dhtDevice.humidity
            print("Temp: {:.1f} F / {:.1f} C \nHumidity: {}% "
                  .format(temperature_f, temperature_c, humidity))
            print("")
        # Errors happen fairly often with DHT sensors, and will occasionally throw exceptions.
        except RuntimeError as error:
            print("n/a")
            print("")
        # Waits before repeating.
        time.sleep(10)

class csvWriter:
    # Fetches the date and time for future file naming and data logging operations.
    starttime = time.time()
    x = datetime.datetime.now()
    # Writes the header for the .csv file once.
    with open('Weather Log %s.csv' % x, 'w', newline='') as f:
        fieldnames = ['Time', 'Temperature (F)', 'Humidity (%)', 'Pressure (hPa)']
        thewriter = csv.DictWriter(f, fieldnames=fieldnames)
        thewriter.writeheader()
    # Fetches the date and time.
    while True:
        from datetime import datetime
        now = datetime.now()
        current_time = now.strftime("%H:%M:%S")
        # Writes incoming data to the .csv file.
        with open('Weather Log %s.csv', 'a', newline='') as f:
            fieldnames = ['TIME', 'TEMP', 'HUMI', 'PRES']
            thewriter = csv.DictWriter(f, fieldnames=fieldnames)
            thewriter.writerow({'TIME': current_time, 'TEMP': temperature_f, 'HUMI': humidity, 'PRES': pres})
        # Writes a message confirming the data's entry into the log, then sets a repeat cycle.
        print("New entry added.")
        time.sleep(10.0 - ((time.time() - starttime) % 10.0))  # Repeat every ten seconds.

r = sensorReader()
t = threading.Thread(target = r, name = "Thread #1")
t.start()
t.join
w = csvWriter()
t = threading.Thread(target = w, name = "Thread #2")
t.start()
It would work better structured like this. If you put the first loop in a function, you can delay its evaluation until you're ready to start the thread. But in a class body it would run immediately and you never get to the second definition.
def sensor_reader():
    # Initializers for the sensors.
    this.code(initializes the sensors)
    while True:
        try:
            this.code(prints the sensor data to the console)
        except:
            print()
        this.code(waits 60 seconds)

threading.Thread(target=sensor_reader, name="Thread #1", daemon=True).start()

this.code(fetches the date and time)
this.code(writes the headers for the excel sheet once)
while True:
    this.code(gets date and time)
    this.code(writes the time and one row of data to excel)
    this.code(writes a message to console then repeats every minute)
I made it a daemon so it will stop when you terminate the program. Note also that we only needed to create one thread, since we already have the main thread.
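To pass the readings between the threads without relying on globals, a queue works well: the reader thread puts each row on the queue, and the main thread blocks on get() until a row arrives. A rough, runnable sketch, with made-up placeholder values standing in for the real DPS310/DHT22 reads:

import queue
import threading
import time

data_q = queue.Queue()

def sensor_reader():
    while True:
        # Placeholder reading; the real code would query the sensors here.
        data_q.put({'pres': 1013.2, 'temp_f': 72.0, 'humi': 45})
        time.sleep(60)

threading.Thread(target=sensor_reader, name="Thread #1", daemon=True).start()

# Main thread: wait for each row, then log it.
while True:
    row = data_q.get()
    print("New entry added.", row)  # a csv.DictWriter.writerow(row) would go here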
This is my take on async, based on the "How to use AsyncHTTPProvider in web3py?" article. However, upon running, this code executes like a synchronous function.

web3.js has support for batch requests (https://dapp-world.com/smartbook/web3-batch-request-Eku8), but web3.py does not.

I am using the Ethereum Alchemy API, which supports about 19 API calls per second, and I have about 1000 Ethereum addresses. How do I modify the code so that I can batch 19 addresses per second?
from web3 import Web3
from web3.eth import AsyncEth
import time
import pandas as pd
import aiohttp
import asyncio

alchemy_url = "https://eth-mainnet.g.alchemy.com/v2/zCTn-wyjipF5DvGFVNEx_XqCKZakaB57"
w3 = Web3(Web3.AsyncHTTPProvider(alchemy_url), modules={'eth': (AsyncEth,)}, middlewares=[])

start = time.time()
df = pd.read_csv('Ethereum/ethereumaddresses.csv')
Wallet_Address = (df.loc[:, 'Address'])
#Balance_storage = []
session_timeout = aiohttp.ClientTimeout(total=None)

async def get_balances():
    for address in Wallet_Address:
        balance = await w3.eth.get_balance(address)
        print(address, balance)

asyncio.run(get_balances())

end = time.time()
total_time = end - start
print(f"It took {total_time} seconds to make {len(Wallet_Address)} API calls")
I think my idea isn't the best, but you can use it as a temporary solution. For this, you can use ThreadPoolExecutor.

I ran a benchmark and found these results:

Without ThreadPoolExecutor, using the BSC public RPC and just running a for loop, the process takes more than 3 minutes to finish.

With ThreadPoolExecutor, the BSC public RPC, and a 100 ms delay using time.sleep(0.1), it finishes in less than 40 seconds.

With ThreadPoolExecutor, using QuickNode and a 100 ms delay, it finishes in 35 seconds.

Doing simple math (1000 wallets / 19 calls per second), your process needs at least something close to 50 seconds. Try running with 100 ms delays, and if that doesn't work you can increase the delay.

One problem with using time.sleep is that it can't be used alongside a GUI or anything similar, because the GUI will freeze during the process. (I think you can use multiprocessing to get around this.)

The second problem is that this will probably change each address's position in the CSV. (You can assign an _id or similar to each address and reorder the results after it finishes.)

The code below works on BSC (just change the RPC). It finds all the balances and stores them in self.data (a defaultdict), then saves them to a new CSV file called "newBalances.csv" (you can change this):
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from web3 import Web3
import pandas as pd
import time

class multiGetBalanceExample():
    def __init__(self):
        self.initialtime = datetime.now()  # initial time

        #=== Setup Web3 ===#
        self.bsc = "https://bsc-dataseed.binance.org/"  # rpc (change this)
        self.web3 = Web3(Web3.HTTPProvider(self.bsc))  # web3 connect

        #=== Loading Csv file ===#
        self.df = pd.read_csv(r"./Ethereum/ethereumaddresses.csv")
        self.wallet_address = (self.df.loc[:, 'Address'])

        #=== Setup Temporary Address/Balance Save Defaultdict ===#
        self.data = defaultdict(list)

        #=== Start ===#
        self.start_workers(self.data)

        #=== Finish ===#
        self.saveCsv()  # saving in new csv file
        self.finaltime = datetime.now()  # end time
        print(f"\nFinished! Process takes: {self.finaltime - self.initialtime}")

    def start_workers(self, data, workers=10):
        with ThreadPoolExecutor(max_workers=workers) as executor:
            executor.submit(self.getBalances, _data=data, _from=0, _to=101)
            executor.submit(self.getBalances, _data=data, _from=101, _to=201)
            executor.submit(self.getBalances, _data=data, _from=201, _to=301)
            executor.submit(self.getBalances, _data=data, _from=301, _to=401)
            executor.submit(self.getBalances, _data=data, _from=401, _to=501)
            executor.submit(self.getBalances, _data=data, _from=501, _to=601)
            executor.submit(self.getBalances, _data=data, _from=601, _to=701)
            executor.submit(self.getBalances, _data=data, _from=701, _to=801)
            executor.submit(self.getBalances, _data=data, _from=801, _to=901)
            executor.submit(self.getBalances, _data=data, _from=901, _to=1000)
        return data

    def getBalances(self, _data, _from, _to):
        for i in range(_from, _to):
            # == Getting Balances from each wallet == #
            get_balance = self.web3.eth.get_balance(self.wallet_address[i])
            # == Appending in self.data == #
            _data["Address"].append(self.wallet_address[i])
            _data["Balance"].append(get_balance)
            # == Print and time.sleep(100ms) == #
            print(f"Found: {self.wallet_address[i], get_balance}\n")  # printing progress.
            time.sleep(0.1)  # tune this to your rate limit (100 ms took ~40 s in my test).
        return _data

    def saveCsv(self):
        #== Creating new CSV File ==#
        headers = ["Address", "Balance"]
        new_df = pd.DataFrame(columns=headers)
        new_df["Address"] = self.data["Address"]
        new_df["Balance"] = self.data["Balance"]
        new_df.to_csv(r"./Ethereum/newBalances.csv", index=False)  # save

multiGetBalanceExample()
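Staying with the question's asyncio approach is also possible: the await inside the for loop is what serializes the requests, so firing them concurrently in rate-limited batches avoids that. A rough sketch, assuming the same w3 (AsyncHTTPProvider/AsyncEth) and Wallet_Address as in the question, with a batch size of 19 to match the stated Alchemy limit:

import asyncio

async def get_balances_batched(addresses, batch_size=19):
    results = {}
    for i in range(0, len(addresses), batch_size):
        batch = addresses[i:i + batch_size]
        # Fire the whole batch concurrently; gather preserves batch order,
        # which also keeps the address-to-balance mapping intact.
        balances = await asyncio.gather(*(w3.eth.get_balance(a) for a in batch))
        results.update(zip(batch, balances))
        await asyncio.sleep(1)  # crude rate limit: ~19 calls per second
    return results

balances = asyncio.run(get_balances_batched(list(Wallet_Address)))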
I've been thinking about this for a couple of days now and I can't figure it out; I've searched around but couldn't find the answer I was looking for, so any help would be greatly appreciated.

Essentially, what I am trying to do is call a method on a group of objects in my main thread from a separate thread, just once after 2 seconds, after which the thread can exit. I'm only using threading as a way of creating a non-blocking 2-second pause (if there are other ways of accomplishing this, please let me know).
I have a pyqtgraph plot that updates from a websocket stream, and the GUI can only be updated from the thread that starts it (the main one).

What happens is: I open a websocket stream, fill up a buffer for about 2 seconds, make a REST request, apply the updates from the buffer to the data from the REST request, and then update the data/plot as new messages come in. The issue is that I can't figure out how to create a non-blocking 2-second pause in the main thread without creating a child thread. If I create a child thread and pass it the object containing the dictionary I want to update after 2 seconds, I get errors about updating the plot from a different thread. What I THINK is happening is that when the new thread is spawned, the reference to the object I want to update is effectively the object itself, or the dictionary containing the update data now lives in a different thread from the GUI, and that causes the issues.
open websocket --> start filling buffer --> wait 2 seconds --> REST request --> apply updates from buffer to REST data --> update data as new websocket updates/messages come in.
Unfortunately, the websocket and GUI only start when you run pg.exec(), and you can't break them up to start individually: you create them and then start them together (or at least I have failed to find a way to start them separately; I also tried using a separate library to handle the websocket, but that requires starting a thread for incoming messages as well).
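For reference, Qt itself can provide a non-blocking delay with no extra thread: a single-shot timer fires its callback on the GUI thread after the interval, so the plot can be touched safely. A minimal sketch, assuming the refObj and getOrderBookSnapshot from the example below, placed where the thread is currently started:

from pyqtgraph.Qt import QtCore

# Runs refObj.getOrderBookSnapshot() once, 2000 ms from now, on the Qt
# event loop (the main/GUI thread), so no cross-thread access occurs.
QtCore.QTimer.singleShot(2000, refObj.getOrderBookSnapshot)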
This is the minimum reproducible example. Sorry it's pretty long, but I couldn't break it down any further without removing required functionality or losing context:
import json
import importlib
from requests.api import get
import functools
import time
import threading
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore

QtWebSockets = importlib.import_module(pg.Qt.QT_LIB + '.QtWebSockets')

class coin():
    def __init__(self):
        self.orderBook = {'bids': {}, 'asks': {}}
        self.SnapShotRecieved = False
        self.last_uID = 0
        self.ordBookBuff = []
        self.pltwgt = pg.PlotWidget()
        self.pltwgt.show()
        self.bidBar = pg.BarGraphItem(x=[0], height=[1], width=1, brush=(25, 25, 255, 125), pen=(0, 0, 0, 0))
        self.askBar = pg.BarGraphItem(x=[1], height=[1], width=1, brush=(255, 25, 25, 125), pen=(0, 0, 0, 0))
        self.pltwgt.addItem(self.bidBar)
        self.pltwgt.addItem(self.askBar)

    def updateOrderBook(self, message):
        for side in ['a', 'b']:
            bookSide = 'bids' if side == 'b' else 'asks'
            for update in message[side]:
                if float(update[1]) == 0:
                    try:
                        del self.orderBook[bookSide][float(update[0])]
                    except:
                        pass
                else:
                    self.orderBook[bookSide].update({float(update[0]): float(update[1])})
                    while len(self.orderBook[bookSide]) > 1000:
                        del self.orderBook[bookSide][(min(self.orderBook['bids'], key=self.orderBook['bids'].get)) if side == 'b' else (max(self.orderBook['asks'], key=self.orderBook['asks'].get))]
        if self.SnapShotRecieved == True:
            self.bidBar.setOpts(x0=self.orderBook['bids'].keys(), height=self.orderBook['bids'].values(), width=1)
            self.askBar.setOpts(x0=self.orderBook['asks'].keys(), height=self.orderBook['asks'].values(), width=1)

    def getOrderBookSnapshot(self):
        orderBookEncoded = get('https://api.binance.com/api/v3/depth?symbol=BTCUSDT&limit=1000')
        if orderBookEncoded.ok:
            rawOrderBook = orderBookEncoded.json()
            orderBook = {'bids': {}, 'asks': {}}
            for orders in rawOrderBook['bids']:
                orderBook['bids'].update({float(orders[0]): float(orders[1])})
            for orders in rawOrderBook['asks']:
                orderBook['asks'].update({float(orders[0]): float(orders[1])})
            last_uID = rawOrderBook['lastUpdateId']
            while self.ordBookBuff[0]['u'] <= last_uID:
                del self.ordBookBuff[0]
                if len(self.ordBookBuff) == 0:
                    break
            if len(self.ordBookBuff) >= 1:
                for eachUpdate in self.ordBookBuff:
                    self.last_uID = eachUpdate['u']
                    self.updateOrderBook(eachUpdate)
                self.ordBookBuff = []
            self.SnapShotRecieved = True
        else:
            print('Error retrieving order book.')  # RESTful request failed

def on_text_message(message, refObj):
    messaged = json.loads(message)
    if refObj.SnapShotRecieved == False:
        refObj.ordBookBuff.append(messaged)
    else:
        refObj.updateOrderBook(messaged)

def delay(myObj):
    time.sleep(2)
    myObj.getOrderBookSnapshot()

def main():
    pg.mkQApp()
    refObj = coin()

    websock = QtWebSockets.QWebSocket()
    websock.connected.connect(lambda: print('connected'))
    websock.disconnected.connect(lambda: print('disconnected'))
    websock.error.connect(lambda e: print('error', e))
    websock.textMessageReceived.connect(functools.partial(on_text_message, refObj=refObj))
    url = QtCore.QUrl("wss://stream.binance.com:9443/ws/btcusdt@depth@1000ms")
    websock.open(url)

    getorderbook = threading.Thread(target=delay, args=(refObj,), daemon=True)  # , args=(lambda: websocketThreadExitFlag,)
    getorderbook.start()

    pg.exec()

if __name__ == "__main__":
    main()
I am designing a new time/score keeper for an air hockey table using a PyBoard as a base. My plan is to use a TM1637 (4x7-segment display) for the time display, a rotary encoder with a button to set the time, IR and a couple of 7-segment displays for scoring, IR reflector sensors for the goal lines, and a relay to control the fan.

I'm getting hung up trying to separate the clock into its own task while the main code focuses on reading the sensors. I figured I could use uasyncio to split everything up nicely, but I can't figure out where to put the directives to spin off a task for the clock and eventually the sensors.

On execution right now, it appears the rotary encoder is assigned the default value, no timer is started, the encoder doesn't set the time, and the program returns control to the REPL rather quickly.
Prior to trying to async everything, I had the rotary encoder and timer working well. Now, not so much.
from rotary_irq_pyb import RotaryIRQ
from machine import Pin
import pyb  # needed for pyb.Pin below
import tm1637
import utime
import uasyncio

async def countdown(cntr):
    # just init min/sec to any int > 0
    min = sec = 99
    enableColon = True
    while True:
        # update the 4x7seg with the time remaining
        min = abs(int((cntr - utime.time()) / 60))
        sec = (cntr - utime.time()) % 60
        #print(str(), str(sec), sep=':')
        enableColon = not enableColon  # alternately blink the colon
        tm.numbers(min, sec, colon=enableColon)
        if (min + sec == 0):  # once both reach zero, break
            break
        await uasyncio.sleep(500)

X1 = pyb.Pin.board.X1
X2 = pyb.Pin.board.X2
Y1 = pyb.Pin.board.Y1
Y2 = pyb.Pin.board.Y2
button = pyb.Pin(pyb.Pin.board.X3, pyb.Pin.IN)

r = RotaryIRQ(pin_num_clk=X1,
              pin_num_dt=X2,
              min_val=3,
              max_val=10,
              reverse=False,
              range_mode=RotaryIRQ.RANGE_BOUNDED)

tm = tm1637.TM1637(clk=Y1, dio=Y2)

val_old = val_new = 0
while True:
    val_new = r.value()
    if (val_old != val_new):
        val_old = val_new
        print(str(val_new))
    if (button.value()):  # save value as minutes
        loop = uasyncio.get_event_loop()
        endTime = utime.time() + (60 * val_new)
        loop.create_task(countdown(endTime))
        r.close()  # Turn off Rotary Encoder
        break

#loop = uasyncio.get_event_loop()
#loop.create_task(countdown(et))
#loop.run_until_complete(countdown(et))
I'm sure it's something simple, but this is the first non-CLI Python script I've done, so I'm sure there are a bunch of silly mistakes. Any assistance would be appreciated.
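For what it's worth, two things stand out in the listing above: loop.create_task() only schedules the coroutine, and nothing ever runs the loop before the script falls through to the REPL; and uasyncio.sleep(500) waits 500 seconds, not 500 ms (sleep_ms(500) is likely what was meant). A minimal sketch of one way to restructure it, assuming uasyncio v3 (which provides run() and sleep_ms()) and the same r, button, tm, and countdown as above:

async def wait_for_start():
    # Poll the encoder inside the event loop instead of a blocking while-loop.
    val_old = r.value()
    while not button.value():
        val_new = r.value()
        if val_old != val_new:
            val_old = val_new
            print(val_new)
        await uasyncio.sleep_ms(50)
    endTime = utime.time() + 60 * r.value()
    r.close()  # turn off the rotary encoder
    # Sensor tasks could later run alongside via uasyncio.create_task(...).
    await countdown(endTime)

uasyncio.run(wait_for_start())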
I guess my computer's BIOS battery is dead; consequently, the system time is never accurate (it is sometimes stuck at 1 PM with a wrong date), so I came up with this code, with the idea of fetching universal time from an API and setting my computer's time accordingly.

My problem is how to make my code run in the background without printing any ping results to the screen or showing anything. I need it to run as a background process (just a PID) that stays alive as long as the computer is on.

PS: I am using Windows 7.
from json import loads
from urllib.request import urlopen
import logging
from win32api import SetSystemTime
from datetime import datetime
from time import sleep
import re
from os import system

while True:
    # connection is dead with 1, connection is alive with 0
    connection_is_dead = 1
    while connection_is_dead != 0:
        connection_is_dead = system('ping -n 1 google.com')
    logging.basicConfig(level=logging.INFO)
    logging.disable(level=logging.INFO)  # logging off
    logging.info('Connection is up...')
    try:
        with urlopen("http://worldtimeapi.org/api/timezone/africa/tunis") as time_url:
            text = time_url.read()
        logging.info('Time api imported...')
        mytime_dict = loads(text)
        time_now = mytime_dict['datetime']
        logging.info(time_now)
        time_stamp = re.compile(r'(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})\.(\d+)[+|-].*')
        time_match = list(map(int, re.match(time_stamp, time_now).groups()))
        # win32api.SetSystemTime(year, month, dayOfWeek, day, hour, minute, second, milliseconds)
        dayOfWeek = datetime(*time_match[:3]).weekday()
        SetSystemTime(*time_match[:2], dayOfWeek, *time_match[2:6], 0)
        logging.info('Time updated successfully...')
        #system('exit')
        sleep(1800)  # 30 min / reset time every 30 min
    except:
        logging.error('Time was not updated due to an unexpected error... ')
        #system('exit')
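One way to keep this quiet and in the background on Windows: save the script with a .pyw extension (or launch it with pythonw.exe) so no console window is created, and replace os.system('ping ...'), which echoes its output, with a subprocess call that discards it. A small sketch of the ping check, keeping the same 0-means-alive convention as above:

import subprocess

def connection_is_alive():
    # Returns True when ping succeeds; stdout/stderr are discarded so the
    # check prints nothing even when run from a console.
    return subprocess.call(
        ['ping', '-n', '1', 'google.com'],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ) == 0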
I have a method in my tkinter application that is used to export data from another application into a CSV. The export loop is very heavy and takes days to complete.

I just came across the multithreading concept. It's kind of difficult to understand; I spent an entire day on it but couldn't achieve anything. Below is the code I use in my loop. Can this be handled by multiple threads without freezing my tkinter UI?

I have a Label in the tkinter window that shows the number of records (cells) exported.
def export_cubeData(self):
    exportPath = self.entry_exportPath.get()
    for b in itertools.product(*(k.values())):
        self.update()
        if (self.flag == 0):
            list1 = list()
            for pair in zip(dims, b):
                list1.extend(pair)
            list1.append(self.box_value.get())
            mdx1 = mdx.format(*temp, *list1)
            try:
                data = tm1.cubes.cells.execute_mdx(mdx1)
                data1 = Utils.build_pandas_dataframe_from_cellset(data)
                final_df = final_df.append(data1)
                cellCount = tm1.cubes.cells.execute_mdx_cellcount(mdx1)
                finalcellCount = finalcellCount + cellCount
                self.noOfRecordsProcessed['text'] = finalcellCount
            except:
                pass
        else:
            tm.showinfo("Export Interrupted", "Data export has been cancelled")
            return
    final_df.to_csv(exportPath)
    print(time.time() - start)
    tm.showinfo("info", "Data export has been completed")
    self.noOfRecordsProcessed['text'] = '0'
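One common pattern here is to run the heavy loop in a worker thread so the Tk main loop stays responsive, and have the worker report progress through a queue, since Tkinter widgets should only be touched from the main thread. A rough sketch with hypothetical method names (start_export and poll_progress are not from the original code):

import queue
import threading

def start_export(self):
    # Hypothetical handler wired to the export button.
    self.progress_q = queue.Queue()
    threading.Thread(target=self.export_cubeData, daemon=True).start()
    self.after(100, self.poll_progress)

def poll_progress(self):
    # Runs on the main thread: drain the queue and update the label.
    try:
        while True:
            self.noOfRecordsProcessed['text'] = self.progress_q.get_nowait()
    except queue.Empty:
        pass
    self.after(100, self.poll_progress)

Inside export_cubeData, the self.update() call and the direct label assignment would then be replaced with self.progress_q.put(finalcellCount), so the worker never touches the UI directly.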