I created a function that checks whether a path is a directory.
The path depends on a field in a class named "wafer_num".
The function returns False, because the path it gets is:
/usr/local/insight/results/images/toolsDB/lauto_ptest_s2022-08-09/w<class 'int'>
instead of: /usr/local/insight/results/images/toolsDB/lauto_ptest_s2022-08-09/w plus an int number.
The whole path is constant up to the int number attached to the 'w' character.
For example, if the wafer_num field in the class is 2, then the path will be:
/usr/local/insight/results/images/toolsDB/lauto_ptest_s2022-08-09/w2
I'm initializing the class from another file. The initialization works, because many other functions in my code depend on this field.
I'm attaching the function that is giving me trouble, and the class too. Many thanks:
# This is the other file that calls the constructor-like function in my code:
import NR_success_check_BBs as check_BBs
# list of implemented recipes:
sanity = 'sanity_' + time_now + '_Ver_5.1'
R2M_delayer = 'R2M_E2E_delayer_NR_Ver_5.1'
R2M_cross_section = 'R2M_E2E_NR_Ver_5.1'
# ***********************************************************************************************
# ***********************************************************************************************
# the function set_recipe_name will create an object that contains all the data of that recipe
check_BBs.set_recipe_name(R2M_delayer)
# this function contains all the necessary functions to check the object.
check_BBs.success_check()
# EOF
# -------------------------------------------------------
# This is the code file (with the constructor-like function attached):
import datetime
import logging
import os.path
from pathlib import Path
import time
time_now = str(datetime.datetime.now()).split()[0]
log_file_name_to_write = 'test.log'
log_file_name_to_read = 'NR_' + time_now + '.log'
logging.basicConfig(level=logging.INFO, filename=log_file_name_to_write, filemode='a',format='%(asctime)s - %(levelname)s - %(message)s')
R2M_RecipeRun_path = '/usr/local/disk2/unix_nt/R2M/RecipeRun/'
R2M_dir = '/usr/local/disk2/unix_nt/R2M'
metro_callback_format = 'Metro_Reveal_'
class recipe:
    name = ''
    script_name = ''
    wafer_num = int
    images = 0
    metrology_images = 0
    images_output_dir_path = '/usr/local/insight/results/images/toolsDB/lauto_ptest_s/' + time_now + 'w' + str(recipe.wafer_num)
def set_recipe_name(name):
    recipe.name = name
    logging.info("Analyzing the recipe: " + recipe.name)
    if recipe.name == 'sanity_' + time_now + '_Ver_5.1':
        recipe.script_name = 'Sanity_CS.py'
        recipe.wafer_num = 1
    elif recipe.name == 'R2M_E2E_NR_Ver_5.1':
        recipe.script_name = 'R2M.py'
        recipe.wafer_num = 2
    elif recipe.name == 'R2M_E2E_delayer_NR_Ver_5.1':
        recipe.script_name = 'R2M_delayer.py'
        recipe.wafer_num = 3
# ***********************************************************************************************
# ***********************************************************************************************
# This is the function that gives me trouble:
def is_results_images_directory_exist():
    images_directory_path = Path('/usr/local/insight/results/images/toolsDB/lauto_ptest_s' + time_now + '/w' + wafer_num)
    print(images_directory_path)
    is_directory = os.path.isdir(images_directory_path)
    print(is_directory)
    if not is_directory:
        logging.error("There is no images directory for the current recipe at results")
        return False
    return True
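One likely cause, offered here as a hedged note: wafer_num = int stores the type object itself rather than a number, so str(recipe.wafer_num) renders as "<class 'int'>", and the class-level images_output_dir_path is evaluated once at class-definition time, before set_recipe_name ever assigns a real wafer number. A minimal sketch of a fix that builds the path at call time (it assumes the module-level time_now, logging, os.path, and the recipe class from the code above; untested in the original environment):

def is_results_images_directory_exist():
    # Build the path lazily, after set_recipe_name() has stored a real int.
    images_directory_path = ('/usr/local/insight/results/images/toolsDB/lauto_ptest_s'
                             + time_now + '/w' + str(recipe.wafer_num))
    print(images_directory_path)
    if not os.path.isdir(images_directory_path):
        logging.error("There is no images directory for the current recipe at results")
        return False
    return True

In the class body, initializing wafer_num = 0 (or None) instead of wafer_num = int would also make this kind of mistake easier to spot.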
Here is the code. I am not sure what I have to do to loop it a certain number of times:
# import stuff
import os
import random
import string

# generate a random string
letters = string.ascii_lowercase
strgen = ''.join(random.choice(letters) for i in range(10))

# create the file and save it into the dir
filepath = os.path.join('c:/files/' + strgen + '.txt')
if not os.path.exists('c:/files'):
    os.makedirs('c:/files')
f = open(filepath, "w+")
Not sure if I got it right, but do you mean something like below?
count = 0
while count < 10:
    letters = string.ascii_lowercase
    strgen = ''.join(random.choice(letters) for i in range(10))
    # create the file and save it into the dir
    filepath = os.path.join('c:/files/' + strgen + '.txt')
    if not os.path.exists('c:/files'):
        os.makedirs('c:/files')
    f = open(filepath, "w+")
    count = count + 1
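In case it helps, a for loop with a context manager does the same thing without the manual counter, and it closes each file; a small sketch (same hypothetical c:/files path as above):

import os
import random
import string

os.makedirs('c:/files', exist_ok=True)  # create the directory once, up front
for _ in range(10):
    strgen = ''.join(random.choice(string.ascii_lowercase) for _ in range(10))
    filepath = os.path.join('c:/files', strgen + '.txt')
    with open(filepath, "w+") as f:  # closed automatically at the end of the block
        pass  # write to f here if needed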
Place whatever action you want to perform inside the action function.
The interval argument sets how often the action executes, and cancel stops it; a threading.Timer can call cancel after whatever stop interval you want.
Number of times executed = stop interval / set interval.
import time, threading

StartTime = time.time()

def action():
    print('action ! -> time : {:.1f}s'.format(time.time() - StartTime))

class setInterval:
    def __init__(self, interval, action):
        self.interval = interval
        self.action = action
        self.stopEvent = threading.Event()
        thread = threading.Thread(target=self.__setInterval)
        thread.start()

    def __setInterval(self):
        nextTime = time.time() + self.interval
        while not self.stopEvent.wait(nextTime - time.time()):
            nextTime += self.interval
            self.action()

    def cancel(self):
        self.stopEvent.set()

# start action every 0.6s
inter = setInterval(0.6, action)
print('just after setInterval -> time : {:.1f}s'.format(time.time() - StartTime))

# will stop the interval in 5s
t = threading.Timer(5, inter.cancel)
t.start()
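With the numbers above, action fires roughly every 0.6 s until the 5 s timer cancels it, so it runs about 5 / 0.6, around 8 times, matching the rule of thumb stated before the code.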
An inefficient version of what I'm trying to do is this:
while True:
    dynamic_variable = http(request)  # where the http request is subject to change values
    method(dynamic_variable)
Here method(dynamic_variable) isn't guaranteed to finish, and if http(request) returns a different value in the meantime, then method(dynamic_variable) becomes useless work.
It seems like I should be able to change dynamic_variable more efficiently by having it "automatically" update whenever http(request) changes value.
I think what I want to do is called the "observer pattern" but I'm not quite fluent enough in code to know if that's the correct pattern or how to implement it.
A simple example would be much appreciated!
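A minimal sketch of the observer pattern in plain Python (generic, hypothetical names; not tied to the web3 code in the edit below):

class Subject:
    """Holds a value and notifies registered observers when it changes."""
    def __init__(self, value=None):
        self._value = value
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def set(self, value):
        if value != self._value:  # notify only on an actual change
            self._value = value
            for callback in self._observers:
                callback(value)

def method(dynamic_variable):
    print('recomputing with', dynamic_variable)

subject = Subject()
subject.subscribe(method)

# Something still has to poll http(request); the pattern only ensures
# that method runs when the polled value actually changes:
for polled in ['a', 'a', 'b']:  # stand-in for successive http(request) results
    subject.set(polled)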
Edit:
from web3 import Web3
import json
from hexbytes import HexBytes
import numpy as np
import os
import time

INFURA_ROPSTEN_URL = "https://ropsten.infura.io/v3/<api_key>"

# metamask account information
PUBLIC_KEY = "0x3FaD9AccC3A39aDbd9887E82F94602cEA6c7F86f"
PRIVATE_KEY = "myprivatekey"
UNITS_ADDRESS = "units_address"

# from truffle build. For ABI
JSON_PATH = "../truffle/build/contracts/Units.json"

def set_up_web3():
    web3 = Web3(Web3.HTTPProvider(INFURA_ROPSTEN_URL))
    web3.eth.defaultAccount = PUBLIC_KEY
    return web3

def get_units_contract_object():
    with open(JSON_PATH, 'r') as json_file:
        abi = json.load(json_file)['abi']
    return web3.eth.contract(address=UNITS_ADDRESS, abi=abi)

def solve(web3, units):
    nonce = np.random.randint(0, 1e10)
    while True:
        challenge_number_hex = HexBytes(units.functions.getChallengeNumber().call()).hex()
        my_digest_hex = web3.solidityKeccak(
            ['bytes32', 'address', 'uint256'],
            [challenge_number_hex, PUBLIC_KEY, nonce]).hex()
        my_digest_number = int(my_digest_hex, 0)
        target = units.functions.getMiningTarget().call()
        if my_digest_number < target:
            return (nonce, my_digest_hex)
        else:
            nonce += 1

def build_transaction(units, nonce, digest_hex, txn_count):
    return units.functions.mint(
        nonce,
        digest_hex
    ).buildTransaction({
        "nonce": txn_count,
    })

if __name__ == "__main__":
    web3 = set_up_web3()
    txn_count = web3.eth.getTransactionCount(PUBLIC_KEY)
    units = get_units_contract_object()
    _cycle_goal = 20
    _prev_finish = time.time()
    _wait_time = 0
    while True:
        target = units.functions.getMiningTarget().call()
        nonce, digest_hex = solve(web3, units)
        mint_txn = build_transaction(units, nonce, digest_hex, txn_count)
        signed_txn = web3.eth.account.sign_transaction(mint_txn, private_key=PRIVATE_KEY)
        txn_address = web3.eth.sendRawTransaction(signed_txn.rawTransaction)
        txn_count += 1
        print(f"Solution found! nonce={nonce}, digest_hex={digest_hex}")
        _finished = time.time()
        _elapsed = _finished - _prev_finish
        _additional_wait = _cycle_goal - _elapsed
        _wait_time += _additional_wait
        print(f"Waiting {_wait_time}")
        _prev_finish = _finished
        time.sleep(_wait_time)
challenge_number_hex is the variable I want to update.
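Tying that together, a hedged sketch of how the Subject class from the example above could wrap the challenge-number poll, reusing units, HexBytes, and time from the question's code (untested against a real contract):

challenge_subject = Subject()
# restart the expensive work only when the challenge actually changes
challenge_subject.subscribe(lambda challenge: print('new challenge:', challenge))

while True:
    current = HexBytes(units.functions.getChallengeNumber().call()).hex()
    challenge_subject.set(current)  # observers fire only on a change
    time.sleep(1)  # hypothetical polling interval; tune to taste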
I'd like to change the log's directory depending on the time (specifically the hour) at which a new log entry is created.
For example, say the message aaa is saved to a log file in directory d:\log\191105\09\log.log at 09:59 while the program is running.
As time passes, a new log entry bbb should be saved to a log file in a different directory, d:\log\191105\10\log.log, at 10:00, without terminating the program.
My log manager code follows:
import logging
import datetime
import psutil
import os, os.path

class LogManager:
    today = datetime.datetime.now()
    process_name = psutil.Process().name()
    process_id = str(psutil.Process().pid)
    log_dir = "D:/LOG/" + today.strftime("%Y%m%d") + "/" + today.strftime("%H") + "/"
    log_filename = process_name + '_[' + process_id + ']_' + today.strftime("%Y-%m-%d_%H") + '.log'
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    # date/directory config
    log = logging.getLogger('mypython')
    log.setLevel(logging.INFO)
    formatter = logging.Formatter('%(asctime)s||%(levelname)s||%(message)s')
    fileHandler = logging.FileHandler(log_dir + log_filename)
    fileHandler.setFormatter(formatter)
    log.addHandler(fileHandler)

    def update(self):
        self.today = datetime.datetime.now()
        old_log_dir = self.log_dir
        self.log_dir = "D:/LOG/" + self.today.strftime("%Y%m%d") + "/" + self.today.strftime("%H") + "/"
        if old_log_dir == self.log_dir:
            return
        self.log_filename = self.process_name + '_[' + self.process_id + ']_' + self.today.strftime("%Y-%m-%d_%H") + '.log'
        if not os.path.exists(self.log_dir):
            os.makedirs(self.log_dir)
        self.log = logging.getLogger('mypython')
        # here, I want to update my file handler
        for hdlr in self.log.handlers[:]:
            if isinstance(hdlr, logging.FileHandler):
                self.log.removeHandler(hdlr)
        self.fileHandler = logging.FileHandler(self.log_dir + self.log_filename)
        self.fileHandler.setFormatter(self.formatter)
        self.log.addHandler(self.fileHandler)

    def i(self, ex):
        self.update()
        self.log.info(ex)

    def w(self, ex):
        self.update()
        self.log.warning(ex)
and I call these functions like
import LogManager as lm
logManager = lm.LogManager()
logManager.i('message')
It works well, but it seems like it does not update its fileHandler after the hour passes.
I tried to find out whether the log has an updateHandler method... but it doesn't.
What should I do?
You can do this much more easily by using what is already in the logging library. Specifically, there is a TimedRotatingFileHandler for which you only need to change how it names the files it creates. I've created a minimal working demo for you which changes the folder that logs get saved into every second.
Edit: changed to roll over on every full hour instead of every second, as per the comment below.
import os
import logging
from datetime import datetime
from time import sleep, strftime
from logging.handlers import TimedRotatingFileHandler

def namer(default_name):
    head, tail = os.path.split(default_name)
    folder_name = 'logs' + strftime('%H%M%S')
    folder = os.path.join(head, folder_name)
    try:
        os.mkdir(folder)
    except FileExistsError:
        # folder already exists
        pass
    return os.path.join(folder, tail)

logger = logging.getLogger()
handler = TimedRotatingFileHandler('base.log', when='H')
handler.namer = namer
# set the rollover time to the next full hour
handler.rolloverAt = datetime.now().replace(microsecond=0, second=0, minute=0).timestamp() + 60*60
logger.addHandler(handler)

logger.warning('test1')
sleep(2)
logger.warning('test2')
I've solved the problem. The problem was related to my %PATH%.
I have a script which works with an argument. In PowerShell I've tried the command you can see below:
.\dsrf2csv.py C:\Python27\a\DSR_testdata.tsv.gz
And you can see the script below:
def __init__(self, dsrf2csv_arg):
    self.dsrf_filename = dsrf2csv_arg
    dsrf_path, filename = os.path.split(self.dsrf_filename)
    self.report_outfilename = os.path.join(dsrf_path, filename.replace('DSR', 'Report').replace('tsv', 'csv'))
    self.summary_outfilename = os.path.join(dsrf_path, filename.replace('DSR', 'Summary').replace('tsv.gz', 'csv'))
But when I try to run this script, nothing happens. How should I run this script with a file (for example, testdata.tsv.gz)?
Note: the script and the file are in the same location.
Full script:
import argparse
import atexit
import collections
import csv
import gzip
import os
SKIP_ROWS = ['HEAD', '#HEAD', '#SY02', '#SY03', '#AS01', '#MW01', '#RU01',
             '#SU03', '#LI01', '#FOOT']

REPORT_HEAD = ['Asset_ID', 'Asset_Title', 'Asset_Artist', 'Asset_ISRC',
               'MW_Asset_ID', 'MW_Title', 'MW_ISWC', 'MW_Custom_ID',
               'MW_Writers', 'Views', 'Owner_name', 'Ownership_Claim',
               'Gross_Revenue', 'Amount_Payable', 'Video_IDs', 'Video_views']

SUMMARY_HEAD = ['SummaryRecordId', 'DistributionChannel',
                'DistributionChannelDPID', 'CommercialModel', 'UseType',
                'Territory', 'ServiceDescription', 'Usages', 'Users',
                'Currency', 'NetRevenue', 'RightsController',
                'RightsControllerPartyId', 'AllocatedUsages', 'AmountPayable',
                'AllocatedNetRevenue']
class DsrfConverter(object):
    """Converts DSRF 3.0 to YouTube CSV."""

    def __init__(self, dsrf2csv_arg):
        """Creates the output file names."""
        self.dsrf_filename = dsrf2csv_arg
        dsrf_path, filename = os.path.split(self.dsrf_filename)
        print(self.dsrf_filename)
        input("Press Enter to continue...")
        self.report_outfilename = os.path.join(dsrf_path, filename.replace(
            'DSR', 'Report').replace('tsv', 'csv'))
        self.summary_outfilename = os.path.join(dsrf_path, filename.replace(
            'DSR', 'Summary').replace('tsv.gz', 'csv'))
    def parse_blocks(self, reader):
        """Generator for parsing all the blocks from the file.

        Args:
            reader: the handler of the input file

        Yields:
            block_lines: A full block as a list of rows.
        """
        block_lines = []
        current_block = None
        for line in reader:
            if line[0] in SKIP_ROWS:
                continue
            # Exit condition
            if line[0] == 'FOOT':
                yield block_lines
                return  # 'return' ends the generator cleanly (PEP 479)
            line_block_number = int(line[1])
            if current_block is None:
                # Initialize
                current_block = line_block_number
            if line_block_number > current_block:
                # End of block, yield and build a new one
                yield block_lines
                block_lines = []
                current_block = line_block_number
            block_lines.append(line)
        # Also return the last block
        yield block_lines
    def process_single_block(self, block):
        """Handles a single block in the DSR report.

        Args:
            block: Block as a list of lines.

        Returns:
            (summary_rows, report_row) tuple.
        """
        views = 0
        gross_revenue = 0
        summary_rows = []
        owners_data = {}
        # Create an ordered dictionary with a key for every column.
        report_row_dict = collections.OrderedDict(
            [(column_name.lower(), '') for column_name in REPORT_HEAD])
        for line in block:
            if line[0] == 'SY02':  # Save the financial summary
                summary_rows.append(line[1:])
                continue
            if line[0] == 'AS01':  # Sound recording information
                report_row_dict['asset_id'] = line[3]
                report_row_dict['asset_title'] = line[5]
                report_row_dict['asset_artist'] = line[7]
                report_row_dict['asset_isrc'] = line[4]
            if line[0] == 'MW01':  # Composition information
                report_row_dict['mw_asset_id'] = line[2]
                report_row_dict['mw_title'] = line[4]
                report_row_dict['mw_iswc'] = line[3]
                report_row_dict['mw_writers'] = line[6]
            if line[0] == 'RU01':  # Video level information
                report_row_dict['video_ids'] = line[3]
                report_row_dict['video_views'] = line[4]
            if line[0] == 'SU03':  # Usage data of the sound recording asset
                # Summing up views and revenues for each sub-period
                views += int(line[5])
                gross_revenue += float(line[6])
                report_row_dict['views'] = views
                report_row_dict['gross_revenue'] = gross_revenue
            if line[0] == 'LI01':  # Ownership information
                # if we have already parsed a LI01 line with that owner
                if line[3] in owners_data:
                    # keep only the latest ownership
                    owners_data[line[3]]['ownership'] = line[6]
                    owners_data[line[3]]['amount_payable'] += float(line[9])
                else:
                    # need to create the entry for that owner
                    data_dict = {'custom_id': line[5],
                                 'ownership': line[6],
                                 'amount_payable': float(line[9])}
                    owners_data[line[3]] = data_dict
        # get rid of owners which do not have an ownership or an amount payable
        owners_to_write = [o for o in owners_data
                           if (owners_data[o]['ownership'] > 0
                               and owners_data[o]['amount_payable'] > 0)]
        report_row_dict['owner_name'] = '|'.join(owners_to_write)
        report_row_dict['mw_custom_id'] = '|'.join(
            [owners_data[o]['custom_id'] for o in owners_to_write])
        report_row_dict['ownership_claim'] = '|'.join(
            [owners_data[o]['ownership'] for o in owners_to_write])
        report_row_dict['amount_payable'] = '|'.join(
            [str(owners_data[o]['amount_payable']) for o in owners_to_write])
        # Sanity check. The number of values must match the number of columns.
        assert len(report_row_dict) == len(REPORT_HEAD), 'Row is wrong size :/'
        return summary_rows, report_row_dict
    def run(self):
        finished = False

        def removeFiles():
            if not finished:
                os.unlink(self.report_outfilename)
                os.unlink(self.summary_outfilename)

        atexit.register(removeFiles)
        with gzip.open(self.dsrf_filename, 'rb') as dsr_file, gzip.open(
                self.report_outfilename, 'wb') as report_file, open(
                self.summary_outfilename, 'wb') as summary_file:
            dsr_reader = csv.reader(dsr_file, delimiter='\t')
            report_writer = csv.writer(report_file)
            summary_writer = csv.writer(summary_file)
            report_writer.writerow(REPORT_HEAD)
            summary_writer.writerow(SUMMARY_HEAD)
            for block in self.parse_blocks(dsr_reader):
                summary_rows, report_row = self.process_single_block(block)
                report_writer.writerow(report_row.values())
                summary_writer.writerows(summary_rows)
        finished = True
if __name__ == '__main__':
    arg_parser = argparse.ArgumentParser(
        description='Converts DDEX DSRF UGC profile reports to Standard CSV.')
    required_args = arg_parser.add_argument_group('Required arguments')
    required_args.add_argument('dsrf2csv_arg', type=str)
    args = arg_parser.parse_args()
    dsrf_converter = DsrfConverter(args.dsrf2csv_arg)
    dsrf_converter.run()
In general, executing a Python script in PowerShell like this: .\script.py has two requirements:
Add the path to the Python binaries to your %PATH%: $env:Path = $env:Path + ";C:\Path\to\python\binaries\"
Add the extension .py to the PATHEXT environment variable: $env:PATHEXT += ";.PY"
The latter only applies to the current PowerShell session. If you want it in all future PowerShell sessions, add that line to your PowerShell profile (e.g. notepad $profile).
In your case there is also an issue with the Python script you are trying to execute: def __init__(self) is a constructor for a class, like:
class Foo:
    def __init__(self):
        print("foo")
Did you give us your complete script?
I'd like to transform all occurrences of "some_func(a, b)" in a Python module to "assert a == b" via Python's standard-library lib2to3. I wrote a script that takes source as input:
# convert.py
# convert assert_equal(a, b) to assert a == b
from lib2to3 import refactor
refac = refactor.RefactoringTool(['fix_assert_equal'])
result = refac.refactor_string('assert_equal(123, 456)\n', 'assert equal')
print(result)
and the actual fixer in a separate module:
# fix_assert_equal.py, in same folder as convert.py
from lib2to3 import fixer_base, pygram, pytree, pgen2
import ast
import logging
grammar = pygram.python_grammar
logger = logging.getLogger("RefactoringTool")
driver = pgen2.driver.Driver(grammar, convert=pytree.convert, logger=logger)
dest_tree = driver.parse_string('assert a == b\n')
class FixAssertEqual(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = """
    power< 'assert_equal'
           trailer<
               '('
               arglist<
                   obj1=any ','
                   obj2=any
               >
               ')'
           >
    >
    """
    def transform(self, node, results):
        assert results
        obj1 = results["obj1"]
        obj1 = obj1.clone()
        obj1.prefix = ""
        obj2 = results["obj2"]
        obj2 = obj2.clone()
        obj2.prefix = ""
        prefix = node.prefix
        dest_tree2 = dest_tree.clone()
        node = dest_tree2.children[0].children[0].children[1]
        node.children[0] = obj1
        node.children[2] = obj2
        dest_tree2.prefix = prefix
        return dest_tree2
However, this produces the output assert123 ==456 instead of assert 123 == 456. Any idea how to fix this?
I found the solution: the prefixes should not be set to "", as that removes the space before each object.
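For completeness, a hedged sketch of the corrected transform, keeping a single-space prefix on each operand instead of clearing it (a drop-in for FixAssertEqual.transform above, assuming the rest of the question's module; reasoned through but not verified on a real run):

    def transform(self, node, results):
        obj1 = results["obj1"].clone()
        obj1.prefix = " "  # space between 'assert' and the first operand
        obj2 = results["obj2"].clone()
        obj2.prefix = " "  # space between '==' and the second operand
        dest_tree2 = dest_tree.clone()
        comparison = dest_tree2.children[0].children[0].children[1]
        comparison.children[0] = obj1
        comparison.children[2] = obj2
        dest_tree2.prefix = node.prefix
        return dest_tree2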