Outcome 1 required:
The first batch of code below is in its working form.
Please help me create a function "def Calculations():" that contains all of the list calculations and returns the same results with the static list. With the calculations wrapped in proper functions I will be able to refine the problem and hopefully move forward.
Outcome 2 required, for those who want to go in depth:
When I run the code against a live list that appends every x interval, it stalls the information feed. I believe the cause is recreating the appended lists in ever-increasing batches, but I don't have a solution for it. Below is the working code.
I am getting my live data from Binance as an appending list of closes only, for anyone who would like to test it live.
The data could come from any source; it does not need to be Binance, as long as it is an appending list of closes in float format.
See the code below.
import itertools
l = [16.329,16.331, 16.3705, 16.3965, 16.44, 16.4227, 16.4028, 16.37, 16.3829, 16.3482, 16.3614, 16.4191, 16.4008, 16.4048, 16.4076, 16.3724, 16.3599, 16.3872, 16.3794, 16.3528, 16.3886, 16.3904, 16.3815, 16.3864, 16.4254, 16.4411, 16.4151, 16.4338, 16.4212, 16.3819, 16.2857, 16.2703, 16.2408, 16.1938, 16.2038, 16.2035, 16.217, 16.2374, 16.2414, 16.2238, 16.1787, 16.2725, 16.2964, 16.3155, 16.238, 16.2149, 16.2992, 16.3568, 16.2793, 16.2467, 16.312, 16.3117, 16.3017, 16.3465, 16.3882, 16.3698, 16.307, 16.3328, 16.3311, 16.3466, 16.3382, 16.3703, 16.3502, 16.3661, 16.38, 16.3972, 16.4141, 16.393, 16.3769, 16.3683, 16.4136, 16.3774, 16.3709, 16.3179, 16.3019, 16.3149, 16.2838, 16.2689, 16.2602, 16.2679, 16.2921, 16.312, 16.3158, 16.3198, 16.2955, 16.303, 16.327, 16.356, 16.313, 16.3, 16.2806, 16.2634, 16.2856, 16.2702, 16.2136, 16.2782, 16.276, 16.2231, 16.2255, 16.1314, 16.0796, 16.1192, 16.0977, 16.1358, 16.1408, 16.1703]
#### VARIABLES & SETTINGS ####
dataingestperiod = 17
original_list_count = len(l)
timeframed_list = l[-dataingestperiod:]
timeframed_list_count = len(timeframed_list)
def groupSequence(x):
    it = iter(x)
    prev, res = next(it), []
    while prev is not None:
        start = next(it, None)
        if start and start > prev:
            res.append(prev)
        elif res:
            yield list(res + [prev])
            res = []
        prev = start

def divbyZero(increasingcount, decreasingcount):
    try: return increasingcount/decreasingcount
    except ZeroDivisionError: return 0

def divbyZeroMain(increasingcountMain, decreasingcountMain):
    try: return increasingcountMain/decreasingcountMain
    except ZeroDivisionError: return 0
#### TIMEFRAMED LIST CALCULATIONS#####
listA_1 = (list(groupSequence(timeframed_list))) # obtain numbers in increasing sequence
# print(len(listA_1)) # number of increases in mixed format
listA = list(itertools.chain.from_iterable(listA_1)) # remove double brackets to enable list count
increasingcount = len(listA)
decreasingcount = timeframed_list_count - increasingcount
trend = divbyZero(increasingcount,decreasingcount)
#### MAIN APPENDING LIST CALCULATIONS #####
listMain_1 = (list(groupSequence(l)))
listMain = list(itertools.chain.from_iterable(listMain_1))
increasingcountMain = len(listMain)
decreasingcountMain = original_list_count - increasingcountMain
trendMain = divbyZeroMain(increasingcountMain,decreasingcountMain)
###Timeframed list increases-only appending to max last"dataingestperiod" perhaps problem on live feed data....###
increase_list_timeframed = []
for x in listA:
    increase_list_timeframed.append(x)
### Main list increases only appending...####
increase_list_Main = []
for x in listMain:
    increase_list_Main.append(x)
###Prints ON TIMEFRAMED LIST ####
print ("---------------")
print ("---------------")
print ("Timeframed list count set at max {}".format(timeframed_list_count))
print ("Count of decreases in timeframed list is {}".format(decreasingcount))
print ("Count of increases in timeframed list is {}".format(increasingcount))
print ("Current timeframed trend is {}".format(trend))
print ("---------------")
print ("---------------")
###Prints ON MAIN LIST ####
print ("Main appending list count so far is {}".format(original_list_count))
print ("Count of decreases in Main appending list is {}".format(decreasingcountMain))
print ("Count of increases in Main appending list is {}".format(increasingcountMain))
print ("Current Main trend is {}".format(trendMain))
The actual live Binance code is listed below, with the above code included. You also need to install "pip install python-binance" and "pip install websocket_client"; I got the Binance access code from ParttimeLarry.
Outcome 2 required: when run live, all calculations should run without interrupting the feed.
import itertools
import copy
import websocket, json, pprint, talib, numpy
from binance.client import Client
from binance.enums import *
#DATA FROM WEBSOCKETS########
SOCKET = "wss://stream.binance.com:9443/ws/linkusdt@kline_1m"  # kline stream name is <symbol>@kline_<interval>
API_KEY = 'yourAPI_KEY'
API_SECRET ='yourAPI_Secret'
closes = [] # created for RSI indicator only using closes
in_position = False
client = Client(API_KEY, API_SECRET) # tld='us'
def order(side, quantity, symbol, order_type=ORDER_TYPE_MARKET):
    try:
        print("sending order")
        order = client.create_order(symbol=symbol, side=side, type=order_type, quantity=quantity)
        print(order)
    except Exception as e:
        print("an exception occurred - {}".format(e))
        return False
    return True

def on_open(ws):
    print('opened connection')
    # start_time = datetime.datetime.now().time().strftime('%H:%M:%S')
    # try:
    #     file = open("C:/GITPROJECTS/binance-bot/csvstats.txt", "+a")
    #     file.write("New Session Open Connection Start at time {}\n".format(datetime.datetime.now()))
    # finally:
    #     file.close()

def on_close(ws):
    print('closed connection')
def on_message(ws, message):
    global closes, in_position
    print('received message')
    json_message = json.loads(message)
    pprint.pprint(json_message)
    candle = json_message['k']
    is_candle_closed = candle['x']
    close = candle['c']
    if is_candle_closed:
        print("candle closed at {}".format(close))
        closes.append(float(close))
        print("closes")
        print(closes)
        ####################################################################################
        ######## CALCULATIONS ON INDICATORS ################################################
        # dataingestperiod = 5
        l = copy.deepcopy(closes)
        maincountofcloses = len(l)
        print ("Total count of closes so far {}".format(maincountofcloses))
        #### VARIABLES & SETTINGS ####
        l = copy.deepcopy(closes)
        dataingestperiod = 3
        original_list_count = len(l)
        # print ("Main list count so far is {}".format(original_list_count))
        timeframed_list = l[-dataingestperiod:]
        timeframed_list_count = len(timeframed_list)
        # print ("Timeframed list count set at max {}".format(timeframed_list_count))

        def groupSequence(x):
            it = iter(x)
            prev, res = next(it), []
            while prev is not None:
                start = next(it, None)
                if start and start > prev:
                    res.append(prev)
                elif res:
                    yield list(res + [prev])
                    res = []
                prev = start

        def divbyZero(increasingcount, decreasingcount):
            try: return increasingcount/decreasingcount
            except ZeroDivisionError: return 0

        def divbyZeroMain(increasingcountMain, decreasingcountMain):
            try: return increasingcountMain/decreasingcountMain
            except ZeroDivisionError: return 0

        #### TIMEFRAMED LIST CALCULATIONS ####
        listA_1 = list(groupSequence(timeframed_list))  # obtain numbers in increasing sequence
        # print(len(listA_1))  # number of increases in mixed format
        listA = list(itertools.chain.from_iterable(listA_1))  # remove double brackets to enable list count
        increasingcount = len(listA)
        decreasingcount = timeframed_list_count - increasingcount
        trend = divbyZero(increasingcount, decreasingcount)

        #### MAIN APPENDING LIST CALCULATIONS ####
        listMain_1 = list(groupSequence(l))
        listMain = list(itertools.chain.from_iterable(listMain_1))
        increasingcountMain = len(listMain)
        decreasingcountMain = original_list_count - increasingcountMain
        trendMain = divbyZeroMain(increasingcountMain, decreasingcountMain)

        increase_list_timeframed = []
        for x in listA:
            increase_list_timeframed.append(x)

        increase_list_Main = []
        for x in listMain:
            increase_list_Main.append(x)

        ### Prints ON TIMEFRAMED LIST ####
        print ("---------------")
        print ("---------------")
        print ("Timeframed list count set at max {}".format(timeframed_list_count))
        print ("Count of decreases in timeframed list is {}".format(decreasingcount))
        print ("Count of increases in timeframed list is {}".format(increasingcount))
        print ("Current timeframed trend is {}".format(trend))
        print ("---------------")
        print ("---------------")
        ### Prints ON MAIN LIST ####
        print ("Main appending list count so far is {}".format(original_list_count))
        print ("Count of decreases in Main appending list is {}".format(decreasingcountMain))
        print ("Count of increases in Main appending list is {}".format(increasingcountMain))
        print ("Current Main trend is {}".format(trendMain))
        # threading.Timer(10.0, divbyZeroMain).start()
        # threading.Timer(10.0, divbyZero).start()

# ws = websocket.WebSocketApp(SOCKET, on_open=on_open, on_close=on_close, on_message=on_message)
# ws.run_forever()
ws = websocket.WebSocketApp(SOCKET, on_open=on_open, on_close=on_close, on_message=on_message)
ws.run_forever()
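For Outcome 2, one possible direction (my own suggestion, not from the original post) is to stop rescanning the whole closes list on every candle and instead keep the Main counts incrementally; only the small timeframed window is recomputed each time. A hedged sketch, reproducing the same counting rule as groupSequence (a close counts once it belongs to an increasing run):
# Sketch only: keep running counts that are updated per new close
increasingcountMain = 0   # closes that belong to an increasing run so far
prev_close = None         # last close seen
prev_counted = False      # whether the last close is already in an increasing run

def update_main_counts(new_close):
    global increasingcountMain, prev_close, prev_counted
    if prev_close is not None and new_close > prev_close:
        if not prev_counted:
            increasingcountMain += 1   # the previous close joins a run now
        increasingcountMain += 1       # the new close extends the run
        prev_counted = True
    else:
        prev_counted = False
    prev_close = new_close

# inside on_message, after closes.append(float(close)):
#     update_main_counts(float(close))
#     decreasingcountMain = len(closes) - increasingcountMain
#     trendMain = divbyZeroMain(increasingcountMain, decreasingcountMain)
#     # the 3- to 17-element timeframed window stays cheap, so the original
#     # groupSequence-based calculation can still be used for it as-is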
Related
I have Python code that does data analytics on a csv file, and I want it to run periodically in a Docker container. Every 15 seconds it should automatically look at folder A; if there is a csv file in it, it should process it and put an html report with the same name in folder B.
Here is my Python code.
# This program pulls data from a csv file and writes it out as an html file.
# The csv file contains device names, card names and card temperatures.
# The html file contains: how many devices and how many cards are in the system,
# which device has the hottest card, and a table listing, per device, how many
# cards it has, how many are at a temperature of 70 or above, and the highest
# and average card temperatures.
# NOTE: The print functions in the program are written for trial purposes.
from enum import unique
from re import A, T
import pandas as pd
from prettytable import PrettyTable, PLAIN_COLUMNS
table = PrettyTable() #create a table for device
table2 = PrettyTable() #create a table for summary
table.field_names = ["Device -", "Total # of Cards - ", "High Temp. Cards # - ", "Max Temperature - ", "Avg. Temperature "]
table2.field_names = [" "," "]
df = pd.read_csv("cards.csv", sep=';', usecols=['Device', 'Card', 'Temperature'])  # index_col=["Device","Card"]
print(type(df))
print(df["Device"].nunique(),"\n\n") # number of unique server
total_devices = df["Device"].nunique() # NUMBER OF DEVICES IN DIFFERENT TYPES
print(total_devices)
print(df["Device"].loc[1],"\n\n")
print(df['Temperature'].max(),"\n\n")
maxTemp = df['Temperature'].max() #finding max temperature
print("total card ", )
i = 0
j = 1
# Finding the card with the max temperature and the server where the card is located
while j > 0:
    if df["Temperature"].loc[i] == df["Temperature"].max():
        print(df["Device"].loc[i])
        print(df["Card"].loc[i])
        deviceName = df["Device"].loc[i]
        cardName = df["Card"].loc[i]
        j = 0
    else:
        i = i + 1
dev_types = df["Device"].unique() # Server's names
print("\n\n")
newstr = cardName + "/" + deviceName
# Building the summary table
table2.add_row(["Total Devices ", total_devices])
table2.add_row(["Total Cards ", len(df["Card"])])
table2.add_row(["Max Card Temperature ", df["Temperature"].max()])
table2.add_row(["Hottest Card / Device " ,newstr])
print(table2)
row_num = len(df)
print(row_num)
# Read the file again indexed by device so the cards and temperatures are grouped per device; the per-device max temp is found from this
dn = pd.read_csv("cards.csv", sep=';', index_col=["Device"], usecols = ['Device','Card','Temperature'])
sum = []
high = []
#print("max temp: ", dn["Temperature"].loc[dev_types[1]].max())
for x in range(total_devices):  # total devices (according to the file = 3)
    print("\n")
    cardCount = 0  # counts the number of cards belonging to the device
    count2 = 0     # counts the number of cards with a temperature greater than 70
    tempcount = 0
    print(dev_types[x])
    for y in range(row_num):
        if dev_types[x] == df["Device"].loc[y]:
            print(df["Temperature"].loc[y])
            tempcount = tempcount + df["Temperature"].loc[y]  # sum of the card temperatures (used when calculating the average)
            cardCount = cardCount + 1
            if df["Temperature"].loc[y] >= 70:
                count2 = count2 + 1
    maxT = dn["Temperature"].loc[dev_types[x]].max()  # max temperature among the cards belonging to this server
    avg = str(tempcount/cardCount)
    print("avg", avg)
    table.add_row([dev_types[x], cardCount, count2, maxT, avg])  # add the information to the "devices" table
    print("num of cards", cardCount)
    print("high temp cards", count2)
    print("\n\n")
print("\n\n")
print(table)
htmlCode = table.get_html_string()
htmlCode2 = table2.get_html_string()
f= open('devices.html', 'w')
f.write("SUMMARY")
f.write(htmlCode2)
f.write("DEVICES")
f.write(htmlCode)
Whether or not the code is run in Docker doesn't matter.
Wrap all of that current logic (well, not the imports and so on) in a function, say, def process_cards().
Call that function forever, in a loop:
import logging
import time

def process_cards():
    table = PrettyTable()
    ...

def main():
    logging.basicConfig()
    while True:
        try:
            process_cards()
        except Exception:
            logging.exception("Failed processing")
        time.sleep(15)

if __name__ == "__main__":
    main()
As an aside, your data processing code can be vastly simplified:
import pandas as pd
from prettytable import PrettyTable

def get_summary_table(df):
    summary_table = PrettyTable()  # create a table for summary
    total_devices = df["Device"].nunique()
    hottest_card = df.loc[df["Temperature"].idxmax()]
    hottest_device_desc = f"{hottest_card.Card}/{hottest_card.Device}"
    summary_table.add_row(["Total Devices", total_devices])
    summary_table.add_row(["Total Cards", len(df["Card"])])
    summary_table.add_row(["Max Card Temperature", df["Temperature"].max()])
    summary_table.add_row(["Hottest Card / Device ", hottest_device_desc])
    return summary_table

def get_devices_table(df):
    devices_table = PrettyTable(
        [
            "Device",
            "Total # of Cards",
            "High Temp. Cards #",
            "Max Temperature",
            "Avg. Temperature",
        ]
    )
    for device_name, group in df.groupby("Device"):
        count = len(group)
        avg_temp = group["Temperature"].mean()
        max_temp = group["Temperature"].max()
        high_count = group[group.Temperature >= 70]["Temperature"].count()
        print(f"{device_name=} {avg_temp=} {max_temp=} {high_count=}")
        devices_table.add_row([device_name, count, high_count, max_temp, avg_temp])
    return devices_table

def do_processing(csv_file="cards.csv", html_file="devices.html"):
    # df = pd.read_csv(csv_file, sep=';', usecols=['Device', 'Card', 'Temperature'])
    # (Just some random example data)
    df = pd.DataFrame({
        "Device": [f"Device {1 + x // 3}" for x in range(10)],
        "Card": [f"Card {x + 1}" for x in range(10)],
        "Temperature": [59.3, 77.2, 48.5, 60.1, 77.2, 61.1, 77.4, 65.8, 71.2, 60.3],
    })
    summary_table = get_summary_table(df)
    devices_table = get_devices_table(df)
    with open(html_file, "w") as f:
        f.write(
            "<style>table, th, td {border: 1px solid black; border-collapse: collapse;}</style>"
        )
        f.write("SUMMARY")
        f.write(summary_table.get_html_string(header=False))
        f.write("DEVICES")
        f.write(devices_table.get_html_string())

do_processing()
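To cover the folder A / folder B part of the question, process_cards (or do_processing above) could scan an input folder and write a same-named report to the output folder. A rough sketch; the folder names folder_a and folder_b are placeholders, and it reuses do_processing from the code above:
import pathlib

def process_folder(in_dir="folder_a", out_dir="folder_b"):
    # Sketch only: adjust the placeholder folder names to the real paths
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for csv_file in pathlib.Path(in_dir).glob("*.csv"):
        html_file = out / (csv_file.stem + ".html")
        if not html_file.exists():  # skip reports that were already written
            do_processing(csv_file=str(csv_file), html_file=str(html_file))

# in main(), call process_folder() instead of process_cards(), still every 15 seconds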
I have an example of a repeat decorator that runs your function every N seconds or minutes.
I hope this sample helps you.
from typing import Optional, Callable, Awaitable
import asyncio
from functools import wraps

def repeat_every(*, seconds: float, wait_first: bool = False) -> Callable:
    def decorator(function: Callable[[], Optional[Awaitable[None]]]):
        is_coroutine = asyncio.iscoroutinefunction(function)

        @wraps(function)
        async def wrapped():
            async def loop():
                if wait_first:
                    await asyncio.sleep(seconds)
                while True:
                    try:
                        if is_coroutine:
                            await function()
                        else:
                            # run a plain (blocking) function in the default executor;
                            # asyncio has no run_in_threadpool, run_in_executor is the stdlib equivalent
                            await asyncio.get_running_loop().run_in_executor(None, function)
                    except Exception as e:
                        raise e
                    await asyncio.sleep(seconds)
            asyncio.create_task(loop())
            print("Repeat every working well.")
        return wrapped
    return decorator
@repeat_every(seconds=2)
async def main():
    print(2*2)

try:
    loop = asyncio.get_running_loop()
except RuntimeError:
    loop = None

if loop and loop.is_running():
    print('Async event loop already running.')
    tsk = loop.create_task(main())
    tsk.add_done_callback(
        lambda t: print(f'Task done with result= {t.result()}'))
else:
    print('Starting new event loop')
    # note: asyncio.run() returns as soon as the wrapped main() returns, so the
    # background repeat task only keeps running while something keeps the loop alive
    asyncio.run(main())
And there is also the option of making an entrypoint that runs it as a cron job.
lista = [
    {Identity: joe,
     summary: [
         {distance: 1, time: 2, status: idle},
         {distance: 2, time: 5, status: moving}]},
    {unit: imperial}]
I can pull the data easily and put it in pandas. The issue is that if an identity has multiple instances of, say, idle, it takes the last value instead of summing them together.
my code...
zdriverhours = {}
zdistance = {}
zstophours = {}

for driver in resp:
    driverid[driver['AssetID']] = driver['AssetName']
    for value in [driver['SegmentSummary']]:
        for value in value:
            if value['SegmentType'] == 'Motion':
                zdriverhours[driver['AssetID']] = round(value['Time']/3600, 2)
            if value['SegmentType'] == 'Stop':
                zstophours[driver['AssetID']] = round(value['IdleTime']/3600, 2)
            zdistance[driver['AssetID']] = value['Distance']
To obtain the total distance for every driver, replace:
zdistance[driver['AssetID']] = value['Distance']
with
if driver['AssetID'] in zdistance:
    zdistance[driver['AssetID']] = zdistance[driver['AssetID']] + value['Distance']
else:
    zdistance[driver['AssetID']] = value['Distance']
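If you prefer, the same accumulation can be written with collections.defaultdict, which also handles the time sums the same way. A small sketch along the lines of the question's loop (resp and driverid are the names from the question; rounding each segment before summing, as in the original):
from collections import defaultdict

zdistance = defaultdict(float)    # missing keys start at 0.0
zdriverhours = defaultdict(float)
zstophours = defaultdict(float)

for driver in resp:
    driverid[driver['AssetID']] = driver['AssetName']
    for value in driver['SegmentSummary']:
        if value['SegmentType'] == 'Motion':
            zdriverhours[driver['AssetID']] += round(value['Time']/3600, 2)
        if value['SegmentType'] == 'Stop':
            zstophours[driver['AssetID']] += round(value['IdleTime']/3600, 2)
        zdistance[driver['AssetID']] += value['Distance']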
I have a function that prints OHLCV data for stock prices from a websocket. It works, but I have to copy it for each variable (var1 to var14) to get each individual stock's data. How would I automate this process, given that I have the list:
varlist = [var1, var2, var3...var14]
and my code is:
def process_messages_for_var1(msg):
    if msg['e'] == 'error':
        print(msg['m'])
    # If message is a trade, print the OHLC data
    else:
        # Convert time into understandable structure
        transactiontime = msg['k']['T'] / 1000
        transactiontime = datetime.fromtimestamp(transactiontime).strftime('%d %b %Y %H:%M:%S')
        # Process this message once websocket starts
        print("{} - {} - Interval {} - Open: {} - Close: {} - High: {} - Low: {} - Volume: {}".
              format(transactiontime, msg['s'], msg['k']['i'], msg['k']['o'], msg['k']['c'], msg['k']['h'], msg['k']['l'], msg['k']['v']))
        # Also, put information into an array
        kline_array_msg = "{},{},{},{},{},{}".format(
            msg['k']['T'], msg['k']['o'], msg['k']['c'], msg['k']['h'], msg['k']['l'], msg['k']['v'])
        # Insert at first position
        kline_array_dct[var1].insert(0, kline_array_msg)
        if (len(kline_array_dct[var1]) > window):
            # Remove last message when res_array size is > of FIXED_SIZE
            del kline_array_dct[var1][-1]
I'm hoping to get the following result (notice how function name also changes):
def process_messages_for_var2(msg):
    if msg['e'] == 'error':
        print(msg['m'])
    # If message is a trade, print the OHLC data
    else:
        # Convert time into understandable structure
        transactiontime = msg['k']['T'] / 1000
        transactiontime = datetime.fromtimestamp(transactiontime).strftime('%d %b %Y %H:%M:%S')
        # Process this message once websocket starts
        print("{} - {} - Interval {} - Open: {} - Close: {} - High: {} - Low: {} - Volume: {}".
              format(transactiontime, msg['s'], msg['k']['i'], msg['k']['o'], msg['k']['c'], msg['k']['h'], msg['k']['l'], msg['k']['v']))
        # Also, put information into an array
        kline_array_msg = "{},{},{},{},{},{}".format(
            msg['k']['T'], msg['k']['o'], msg['k']['c'], msg['k']['h'], msg['k']['l'], msg['k']['v'])
        # Insert at first position
        kline_array_dct[var2].insert(0, kline_array_msg)
        if (len(kline_array_dct[var2]) > window):
            # Remove last message when res_array size is > of FIXED_SIZE
            del kline_array_dct[var2][-1]
You can adjust the function so that it takes one of the vars as an argument. I.e.,
def process_messages(msg, var):
    ...
    kline_array_dct[var].insert(0, kline_array_msg)
    if (len(kline_array_dct[var]) > window):
        # Remove last message when res_array size is > of FIXED_SIZE
        del kline_array_dct[var][-1]
If the processes are generally the same, just define one of them, and give it more arguments:
def process_messages(msg, var):
Then, you can adjust your process code to run through each var when you call it. You can do this by removing the numbered vars in the process code:
if msg['e'] == 'error':
    print(msg['m'])
# If message is a trade, print the OHLC data
else:
    # Convert time into understandable structure
    transactiontime = msg['k']['T'] / 1000
    transactiontime = datetime.fromtimestamp(transactiontime).strftime('%d %b %Y %H:%M:%S')
    # Process this message once websocket starts
    print("{} - {} - Interval {} - Open: {} - Close: {} - High: {} - Low: {} - Volume: {}".
          format(transactiontime, msg['s'], msg['k']['i'], msg['k']['o'], msg['k']['c'], msg['k']['h'], msg['k']['l'], msg['k']['v']))
    # Also, put information into an array
    kline_array_msg = "{},{},{},{},{},{}".format(
        msg['k']['T'], msg['k']['o'], msg['k']['c'], msg['k']['h'], msg['k']['l'], msg['k']['v'])
    # Insert at first position
    kline_array_dct[var].insert(0, kline_array_msg)
    if (len(kline_array_dct[var]) > window):
        # Remove last message when res_array size is > of FIXED_SIZE
        del kline_array_dct[var][-1]
Then, create a simple for loop to call the process for each var in the list:
for var in varList:
    process_messages("msg", var)
The for loop will call the process for each var in the list.
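If your websocket client only passes msg to each callback, another common pattern is to bind each var up front with functools.partial, so every stream gets its own pre-configured handler. A sketch; the registration call at the end is illustrative only (use whatever subscribe method your websocket library actually provides):
from functools import partial

def process_messages(msg, var):
    # same body as above, using kline_array_dct[var]
    ...

# one pre-bound handler per var; each handler only needs msg when it is called
handlers = {var: partial(process_messages, var=var) for var in varlist}

# hypothetical registration, adapt to your client's real API:
# for var, handler in handlers.items():
#     socket_manager.start_kline_socket(symbol=var, callback=handler)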
I have been trying to use a list in Python. I call it once to store a value inside of one loop. I then call it again inside of another loop, but by the time I add something to the list, it has overwritten my old entries with the new ones, before I ever actually add the new one. It may be that I do not fully understand Python, but I will post a simple version of the code below, then the whole version:
ret = []
plan = argument
for i in range(x):
    plan.changeY
    ret.append(plan)
    plan = argument
for i in range(Z):
    plan.changeD
    ret.append(plan)
    plan = argument
The problem arises before I get to the second append: all the values of the first append are altered. The code for it is below.
global PLANCOUNTER
global VARNUM
ret=[]
ret=[]
plan = node.state
keep = plan
if printPlans:
print "fringe[0].state:",plan.id, "---"
printstate(plan)
if node.parent:
print "parent: plan",node.parent.state.id
if len(plan.openconds)>0:
print plan.openconds[0],"is the condition being resolved\n"
openStep = plan.openconds[0][1]#access the step
openStep1=openStep
openCond = plan.openconds[0][0]
plan.openconds = plan.openconds[1:]
keep=plan
if(len(plan.steps)>1):#has init and goal!
########################
#NOT GETTING RID OF THE OPENCOND, SO ASTAR NEVER TAKING IT
#######################
if openStep!="init" and openStep!="goal":
val = openStep.index("*")
openStep=openStep[:val]
numPreConds=len(preconds[openStep])
numStep=len(plan.steps)
for i in plan.steps:
i2=i
plan = keep
if i!="init" and i!="goal":
i=i[:i.index("*")]
if i !="goal" and i!=openStep:
for j in adds[i]:
bool=0
if j==openCond:
plan.causallinks.append((i2,openCond,openStep1))#problem
plan.ordercons.append((i2,openStep1))
PLANCOUNTER+=1
plan.id=PLANCOUNTER
#threats
bol=0
for t in plan.steps:#all steps
t2=t
if t!="init" and t!="goal":
val = t.index("*")
t=t[:val]
for k in deletes[t]:
if k == openCond:
for b in plan.ordercons:
if b ==(t,i):
bol=1
if bol==0 and t!=i:
for v in plan.threats:
if v[0]==(i2,openCond,openStep1) and v[1]==t2:
bol=1
if bol==0 and t!=i and i2!="init":
plan.threats.append(((i2,openCond,openStep1),t2))
else:
bol=0
ret.append(plan)
print len(plan.openconds)+len(plan.threats)," upper\n"
plan=keep
meh=ret
counter=0
arr={}
for k in ret:
print len(k.openconds)+len(k.threats)," ", k.id,"middle"
key = counter
arr[counter]=k
print arr[counter].id
counter+=1
for i in adds:#too many conditions
stepCons = i
plan2 = keep
if i!="goal" and i!="init" and i!=openStep:
for j in adds[i]:
if j==openCond:
nextStep=i
st = str(i)+"*"+str(VARNUM)
VARNUM+=1
plan2.steps.append(st)
plan2.ordercons.append(("init",st))
plan2.ordercons.append((st, "goal"))
plan2.ordercons.append((st,openStep1))
plan2.causallinks.append((st,openCond,openStep1))
##################################################
for k in preconds[i]:#issue is htereeeeeeeee
plan2.openconds.append((k,st))
for k in meh:
print len(k.openconds)+len(k.threats)," ", k.id,"middle2s"
PLANCOUNTER+=1
plan2.id=PLANCOUNTER
#threats
cnt=0
for tr in range(len(arr)):
print len(arr[cnt].openconds)+len(arr[cnt].threats)," ", arr[cnt].id,"middlearr"
cnt+=1
bol=0
for t in plan2.steps:#all steps
t2=t
if t!="init" and t!="goal":
val = t.index("*")
t=t[:val]
for k in deletes[t]:#check their delete list
if k == openCond:#if our condition is on our delete lise
for b in plan2.ordercons:
if b ==(t,i):# and it is not ordered before it
bol=1
if bol==0 and t!=i:
for v in plan2.threats:
if v[0]==(i2,openCond,openStep1) and v[1]==t2:
bol=1
if bol==0 and t!=i and st!="init":
plan2.threats.append(((st,openCond,openStep1),t2))
else:
bol=0
#and y is not before C
#causal link, threatening step
for k in ret:
print len(k.openconds)+len(k.threats)," ", k.id,"middle3"
ret.append(plan2)
print len(plan2.openconds)+len(plan2.threats)," ",plan2.id," lower\n"
elif len(plan.threats)>0:
#keep=plan
openThreatProducer = plan.threats[0][0][0]#access the step
openThreat=plan.threats[0][1]
plan.threats=plan.threats[1:]
print openThreatProducer, " ", openThreat
i=0
while i<2:
plan = keep
if i==0:
bool=0
for k in plan.ordercons:
if (k[0]==openThreat and k[1]==openThreatProducer) or (k[1]==openThreat and k[0]==openThreatProducer):
bool=1
if bool==0:
plan.ordercons.append((openThreatProducer,openThreat))
elif i==1:
bool=0
for k in plan.ordercons:
if (k[0]==openThreat and k[1]==openThreatProducer) or (k[1]==openThreat and k[0]==openThreatProducer):
bool=1
if bool==0:
plan.ordercons.append((openThreat,openThreatProducer))
ret.append(plan)
i+=1
t=len(ret)
for k in ret:
print len(k.openconds)+len(k.threats)," ", k.id,"lowest"
print t
return ret
It seems like after doing plan = argument, plan and argument point to the same object. You should do something like this:
import copy
plan = copy.deepcopy(argument)
This creates an exact copy of argument.
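A tiny example of the difference, with a throwaway Plan class standing in for your node.state objects:
import copy

class Plan:
    def __init__(self):
        self.steps = []

argument = Plan()

alias = argument                        # same object: changes show up under both names
alias.steps.append("A")
print(argument.steps)                   # ['A']

independent = copy.deepcopy(argument)   # a separate object with its own lists
independent.steps.append("B")
print(argument.steps)                   # still ['A']
print(independent.steps)                # ['A', 'B']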
I have a for loop which references a dictionary and prints out the value associated with each key. The code is below:
for i in data:
    if i in dict:
        print dict[i],
How would I format the output so that a new line is created every 60 characters, with the character count along the side? For example:
0001 MRQLLLISDLDNTWVGDQQALEHLQEYLGDRRGNFYLAYATGRSYHSARELQKQVGLMEP
0061 DYWLTAVGSEIYHPEGLDQHWADYLSEHWQRDILQAIADGFEALKPQSPLEQNPWKISYH
0121 LDPQACPTVIDQLTEMLKETGIPVQVIFSSGKDVDLLPQRSNKGNATQYLQQHLAMEPSQ
It's a finicky formatting problem, but I think the following code:
import sys

class EveryN(object):
    def __init__(self, n, outs):
        self.n = n         # chars/line
        self.outs = outs   # output stream
        self.numo = 1      # next tag to write
        self.tll = 0       # tot chars on this line
    def write(self, s):
        while True:
            if self.tll == 0:  # start of line: emit tag
                self.outs.write('%4.4d ' % self.numo)
                self.numo += self.n
            # write up to N chars/line, no more
            numw = min(len(s), self.n - self.tll)
            self.outs.write(s[:numw])
            self.tll += numw
            if self.tll >= self.n:
                self.tll = 0
                self.outs.write('\n')
            s = s[numw:]
            if not s: break

if __name__ == '__main__':
    sys.stdout = EveryN(60, sys.stdout)
    for i, a in enumerate('abcdefgh'):
        print a*(5+ i*5),
shows how to do it -- the output when running for demonstration purposes as the main script (five a's, ten b's, etc, with spaces in-between) is:
0001 aaaaa bbbbbbbbbb ccccccccccccccc dddddddddddddddddddd eeeeee
0061 eeeeeeeeeeeeeeeeeee ffffffffffffffffffffffffffffff ggggggggg
0121 gggggggggggggggggggggggggg hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
0181 hhhhhhh
# test data
data = range(10)
the_dict = dict((i, str(i)*200) for i in range(10))

# your loops as a generator
lines = (the_dict[i] for i in data if i in the_dict)

def format(line):
    def splitter():
        k = 0
        while True:
            r = line[k:k+60]  # take a 60 char block
            if r:             # if there are any chars left
                yield "%04d %s" % (k+1, r)  # format them
            else:
                break
            k += 60
    return '\n'.join(splitter())  # join all the numbered blocks

for line in lines:
    print format(line)
I haven't tested it on actual data, but I believe the code below would do the job. It first builds up the whole string, then outputs it a 60-character line at a time. It uses the three-argument version of range() to count by 60.
s = ''.join(dict[i] for i in data if i in dict)
for i in range(0, len(s), 60):
    print '%04d %s' % (i+1, s[i:i+60])
It seems like you're looking for textwrap
The textwrap module provides two convenience functions, wrap() and
fill(), as well as TextWrapper, the class that does all the work, and
a utility function dedent(). If you’re just wrapping or filling one or
two text strings, the convenience functions should be good enough;
otherwise, you should use an instance of TextWrapper for efficiency.
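For example, textwrap.wrap can produce the 60-character chunks and you only need to add the running position yourself. A small sketch, using the the_dict/data names from the test-data answer above:
import textwrap

s = ''.join(the_dict[i] for i in data if i in the_dict)
# wrap() collapses whitespace and breaks on word boundaries, but for a plain
# sequence with no spaces it simply cuts the string every 60 characters
for pos, chunk in enumerate(textwrap.wrap(s, 60)):
    print('%04d %s' % (pos * 60 + 1, chunk))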