SRate = [5,5,5]
Bots = 120
NQueue = 3
TSim = 100
Exp = 2
DDistance = 1
Lambda = 40 # 120/3 = 40
import random
Elist=[]
AvgSRate = 5
def Initilization(AvgSRate,Lambda,Exp):
    return float((Lambda/(AvgSRate**Exp))*(1+(1/10)*(2*(np.random.seed(-28)) - 1)))

for a in range(1,361):
    Elist.append(Initilization)
I am trying to produce a set of randomly generated numbers between 0 and 1 (decimals) for the initialization of a simulation. However, it spits out the same value every time the loop executes, so when I print Elist I get a list of identical values. The list also contains entries like <function Initilization at ...>; could someone help me eliminate that portion so the printed list contains only the numbers?
The issue is that np.random.seed(-28) just seeds the random generator [Documentation]; it does not return any random numbers. To get a random number, use np.random.rand().
Example -
return float((Lambda/(AvgSRate**Exp))*(1+(1/10)*(2*(np.random.rand()) - 1)))
If you want to seed the random generator, do it once before the for loop that calls the Initilization function.
The function after this should look like -
def Initilization(AvgSRate,Lambda,Exp):
return float((Lambda/(AvgSRate**Exp))*(1+(1/10)*(2*(np.random.rand()) - 1)))
Also, one more issue: Elist.append(Initilization) does not append the value returned by a function call; it appends the function reference itself. Change it to Elist.append(Initilization(<values for the parameters>)) to call the function and append the returned value to Elist.
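Putting both fixes together, a minimal sketch might look like the following. The constants come from the question; note that NumPy rejects negative seeds, so a non-negative one is used here, and 1/10 is written as 0.1 to avoid integer-division surprises:

```python
import numpy as np

np.random.seed(28)  # seed once, before the loop (NumPy requires a non-negative seed)

AvgSRate = 5
Lambda = 40
Exp = 2

def Initilization(avg_s_rate, lam, exp):
    # np.random.rand() returns a float in [0, 1); 2*r - 1 maps it to [-1, 1)
    return float((lam / (avg_s_rate ** exp)) * (1 + 0.1 * (2 * np.random.rand() - 1)))

Elist = [Initilization(AvgSRate, Lambda, Exp) for _ in range(360)]
```

Each entry now differs from the others, and the list holds floats instead of function references.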
I'm implementing the following code to get match history data from an API:
my_matches = watcher.match.matchlist_by_puuid(
    region=my_region,
    puuid=me["puuid"],
    count=100,
    start=1)
The max number of items I can request per call is 100 (count). I would like my_matches to contain the first 1000 matches, which means looping start over pages 1 - 10.
Is there any way to effectively do this?
Based on the documentation (see page 17), this function returns a list of strings. The function can return at most 100 matches per call. It also accepts a start index for where to begin returning matches (which defaults to 0). A possible solution for your problem would look like this:
allMatches = []  # will become a list containing 10 lists of matches
for match_page in range(10):  # 10 pages of 100 matches each
    countNum = match_page * 100  # first will be 0, second 100, third 200 etc...
    my_matches = watcher.match.matchlist_by_puuid(
        region=my_region,
        puuid=me["puuid"],
        count=100,
        start=countNum)
    # ^ Notice how we use countNum as the start for returning
    allMatches.append(my_matches)
If you want to remain concise, and you want your matches to be a single 1000-long list of results, you can directly concatenate all the outputs of size 100:
import itertools
matches = list(itertools.chain.from_iterable(watcher.match.matchlist_by_puuid(
    region=my_region,
    puuid=me["puuid"],
    count=100,
    start=i*100) for i in range(10)))
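The same pagination pattern can be sketched with a stubbed fetch function (watcher is only available with the Riot API client, so fetch_page below is a hypothetical stand-in that mimics returning 100 match ids per call):

```python
import itertools

def fetch_page(start, count=100):
    # Hypothetical stand-in for watcher.match.matchlist_by_puuid:
    # returns `count` fake match ids beginning at `start`.
    return ["MATCH_%d" % i for i in range(start, start + count)]

# Concatenate ten pages of 100 into one flat 1000-item list.
matches = list(itertools.chain.from_iterable(
    fetch_page(start=i * 100) for i in range(10)))

print(len(matches))  # 1000
```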
Essentially, I have a list of hashes. I need to modify each one slightly by inserting a section of the hash in front of it. A bit random, I admit, but all of the main code works fine.
import hashlib
bytes_to_hash = b"this is an interesting bytes object"
cycles = 65
digest_length = 10000
hash_list = [
    hashlib.shake_256(bytes_to_hash).digest(digest_length),
    hashlib.shake_256(bytes_to_hash + b"123").digest(digest_length)
]

for _ in range(cycles):
    number = 0  # will be used to generate the next hash later on
    for x in range(4):
        selection = int(hashlib.sha256(hash_list[-1] + str(x).encode()).hexdigest(), 16) % digest_length
        # ^ used to pseudorandomly select a number
        for count, hash in enumerate(hash_list):
            number += hash[selection]
            prev_hash = hash_list[count - 1]
            prev_hash = prev_hash[:selection - 512] + hash[selection - 512 : selection] + prev_hash[selection:]
            #hash_list[count - 1] = prev_hash
            #^ this is the issue
    number = number % digest_length
    selection_range = hash_list[-1][number - 128 : number]
    hash_list.append(hashlib.shake_256(selection_range).digest(digest_length))
The problem comes when I attempt to actually modify the list itself. This simple line screws everything up -
hash_list[count - 1] = prev_hash
After adding this line, the script suddenly takes a very long time to execute, up from about 75 ms to over 100 seconds. As a sanity check, I removed that line and inserted print(len(prev_hash)); the length of prev_hash was always 10,000, exactly what I expected. It is only when I assign prev_hash to a list index that the length reaches as high as 19,000,000 for no apparent reason. The weirdest part is that after reintroducing that line, it seems to be the length of prev_hash itself that expands, not the value in the list.
I have no idea what I'm doing wrong, any help is appreciated.
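One diagnostic worth running (this sketch is not from the original post; the selection values are illustrative): when selection is smaller than 512, the slice bounds selection - 512 go negative, so prev_hash[:selection - 512] keeps almost the whole object and hash[selection - 512 : selection] becomes empty, and the rebuilt bytes object nearly doubles in length. Once the longer object is stored back into the list, the growth compounds on every cycle.

```python
prev_hash = bytes(10000)
h = bytes(10000)

# selection >= 512: the three pieces tile the original length exactly
selection = 600
rebuilt = prev_hash[:selection - 512] + h[selection - 512:selection] + prev_hash[selection:]
print(len(rebuilt))  # 10000

# selection < 512: selection - 512 is negative, so prev_hash[:selection - 512]
# keeps all but (512 - selection) bytes and h[selection - 512:selection] is
# empty; the result is nearly twice as long as the input
selection = 100
rebuilt = prev_hash[:selection - 512] + h[selection - 512:selection] + prev_hash[selection:]
print(len(rebuilt))  # 19488
```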
I'm optimizing my code that is part of a larger environment with hundreds of classes etc. So I extracted the problematic method and created a simple script in order to measure the performance of this method.
I found that I could speed up the method by simply returning False from is_intersect() instead of calculating and then returning; doing that took the execution time of my program from 25s to 12s.
I was surprised to see that when I created the script and called the method alone 1M times, the execution time was only ~1s. The loop in my original program has only ~2.5k iterations for this example, so the script runs 400 times the number of iterations of the original loop, yet finishes in a fraction of the time.
Benchmark:
original program:
    n of loops: 2.5k
    execution time = 25s
    execution time after replacing the problematic function by a constant = 12s

    n of loops: 10k
    execution time = 21m30s
    execution time after replacing the problematic function by a constant = 46s

external script (single file, shown below):
    n of loops: 10k
    execution time: <1s

    n of loops: 1M
    execution time: ~1s
Original program:
I modified the original names and inserted some comments that may help to understand what each method does. For this example the functionality is very simple:
The program scans an image of the sky with step X and step Y, checking whether a star intersects with another object (a space monkey, if you wish). Not the best example, but I think it may help :)
def my_original_method(self):
    for pos_xy_key, pos_xy_tuple in self.pos_xy.items():  # size of pos_xy is 2.5k items with tuples (float*8 items)
        # e.g. poly1_xmin_xmax = [125, 500]
        #      poly1_ymin_ymax = [60, 600]
        # These values change on each iteration, they never repeat
        poly1_xmin_xmax, poly1_ymin_ymax = modules.utils.polygon_from_pos_xy(pos_xy_tuple)
        for name in self.stars:  # self.stars is a dict() with 1k items
            parent_name = self.stars[name]['parent']
            if name in self.set_with_names:  # size of set is around 20k items
                # e.g. poly2_xmin_xmax = [1, 2]
                #      poly2_ymin_ymax = [1, 2]
                # These values change on each iteration, they may repeat
                # return the absolute value of the star in space
                poly2_xmin_xmax, poly2_ymin_ymax = self.polygon_star(name, parent_name)
                # if False:  # this reduces the execution time from 25s to 12s
                if modules.utils.is_intersect(polygon_particle, polygon_cell):
                    self.results[name] = True
Script for the problematic method
class Test:
    def __init__(self):
        print('hello')

    def is_intersect(self, poly1_xmin_xmax, poly1_ymin_ymax, poly2_xmin_xmax, poly2_ymin_ymax):
        if min(poly1_xmin_xmax) > max(poly2_xmin_xmax) or max(poly1_xmin_xmax) < min(poly2_xmin_xmax):
            return False
        if min(poly1_ymin_ymax) > max(poly2_ymin_ymax) or max(poly1_ymin_ymax) < min(poly2_ymin_ymax):
            return False
        return True

obj_test = Test()
poly1_xmin_xmax = [125, 500]
poly1_ymin_ymax = [60, 600]
temp_dct = dict()
for i in range(1000000):
    poly1_xmin_xmax = [125, 5000]
    poly1_ymin_ymax = [60, 2000]
    poly2_xmin_xmax = [i+1, i+1]
    poly2_ymin_ymax = [61, 500]
    if obj_test.is_intersect(poly1_xmin_xmax, poly1_ymin_ymax, poly2_xmin_xmax, poly2_ymin_ymax):
        temp_dct[i] = True
I thought the problem had something to do with function call overhead, but apparently not, since I computed the intersection directly inside the loop and the execution time was still around 25s.
Any ideas why there's a difference in the execution time of my method inside my original code and inside a separated script?
Your original program has two nested loops: the outer one runs 2,500 times and the inner one runs 1,000 times per outer iteration, so the code inside the inner loop executes 2,500 × 1,000 = 2.5 M times, not 2.5 k. That is why the full program is so much slower than your standalone script, which only makes 1 M calls.
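The arithmetic can be checked directly. Below, is_intersect is copied from the question's standalone script (with self dropped), and the timing uses scaled-down call counts in the same 2.5:1 ratio; the per-call cost is identical in both programs, only the call count differs:

```python
import timeit

def is_intersect(p1x, p1y, p2x, p2y):
    # Axis-aligned bounding-box overlap test, as in the question's script
    if min(p1x) > max(p2x) or max(p1x) < min(p2x):
        return False
    if min(p1y) > max(p2y) or max(p1y) < min(p2y):
        return False
    return True

# The standalone script times 1 M calls, but the original program's nested
# loops execute the inner body len(pos_xy) * len(stars) times per run:
total_calls = 2500 * 1000
print(total_calls)  # 2500000

# Scaled-down timing demo: 2.5x the calls costs roughly 2.5x the time
args = ([125, 500], [60, 600], [1, 2], [1, 2])
t_base = timeit.timeit(lambda: is_intersect(*args), number=100_000)
t_nested = timeit.timeit(lambda: is_intersect(*args), number=250_000)
```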
Hey, so I'm working on some coding homework for my Python class using JES. The assignment is to take a sound, add some white noise to the background, and add an echo as well. There are a few more specifics, but I believe I'm fine with those. We are writing four different functions: a main, an echo function based on a user-defined delay and number of echoes, a white-noise generation function, and a function to merge the sounds.
Here is what I have so far, haven't started the merging or the main yet.
#put the following line at the top of your file. This will let
#you access the random module functions
import random
#White noise generation function, requires a sound to match sound length
def whiteNoiseGenerator(baseSound):
    noise = makeEmptySound(getLength(baseSound))
    for index in range(0, getLength(baseSound)):
        sample = random.randint(-500, 500)
        setSampleValueAt(noise, index, sample)
    return noise
def multipleEchoesGenerator(sound, delay, number):
    endSound = getLength(sound)
    newEndSound = endSound + (delay * number)
    len = 1 + int(newEndSound / getSamplingRate(sound))
    newSound = makeEmptySound(len)
    echoAmplitude = 1.0
    for echoCount in range(1, number):
        echoAmplitude = echoAmplitude * 0.60
        for posns1 in range(0, endSound):
            posns2 = posns1 + (delay * echoCount)
            values1 = getSampleValueAt(sound, posns1) * echoAmplitude
            values2 = getSampleValueAt(newSound, posns2)
            setSampleValueAt(newSound, posns2, values1 + values2)
    return newSound
I receive this error whenever I try to load it in.
The error was:
Inappropriate argument value (of correct type).
An error occurred attempting to pass an argument to a function.
Please check line 38 of C:\Users\insanity180\Desktop\Work\Winter Sophomore\CS 140\homework3\homework_3.py
That line of code is:
setSampleValueAt (newSound, posns2, values1 + values2)
Does anyone have an idea what might be happening here? Any assistance would be great, since I am hoping to give myself plenty of time to finish this assignment. I have gotten a similar error before and it was usually a syntax error; however, I don't see any such errors here.
The sound is made before I run this program, and I defined delay and number as 1 and 3 respectively.
Check the arguments to setSampleValueAt; your sample value must be out of bounds (it should be within -32768 to 32767). You need to do some kind of output clamping in your algorithm.
Another possibility (which, according to further input, was indeed the error) is that your echo falls outside the range of the sample: for example, if your sample is 5 seconds long and the echo is 0.5 seconds long, posns1 + delay can go beyond the length of the sample when the length of the new sound is not calculated correctly.
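Both guards can be sketched as plain helpers (clampSample and indexInBounds are hypothetical names, not JES builtins; the 16-bit range is the JES sample range mentioned above):

```python
# JES samples are signed 16-bit values, so anything passed to
# setSampleValueAt must land in [-32768, 32767].
def clampSample(value):
    return max(-32768, min(32767, int(value)))

# Likewise, guard the write index: posns2 = posns1 + delay * echoCount can
# run past the end of newSound if its length was miscalculated.
def indexInBounds(index, soundLength):
    return 0 <= index < soundLength

print(clampSample(40000))   # 32767
print(clampSample(-50000))  # -32768
```

With these, the inner loop would clamp values1 + values2 before writing and skip (or stop at) any posns2 that fails indexInBounds.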
I'm in need of a function returning only the significant part of a value with respect to a given error. Meaning something like this:
def significant(value, error):
    """ This function takes a value and determines its significant
    accuracy by its error.
    It returns only the scientifically important part of the value and drops the rest. """
    # magic magic magic...
    return formatted_value  # as a string
What i have written so far to show what I mean:
import numpy as np
def signigicant(value, error):
    """ Returns a number in a scientific format, meaning the value has an error
    and that error determines how many digits of the
    value are significant. E.g. value = 12.345MHz,
    error = 0.1MHz => 12.3MHz, because the error is at the first digit.
    (In reality drop the MHz, it is just to show why.)"""
    xx = "%E" % error  # I assume this is most ineffective.
    xx = xx.split("E")
    xx = int(xx[1])
    if error <= value:  # this should be the normal case
        yy = np.around(value, -xx)
        if xx >= 0:  # error is 1 or bigger
            return "%i" % yy
        else:  # error is smaller than 1
            string = "%." + str(-xx) + "f"
            return string % yy
    if error > value:  # this should not be usual, but it can happen
        return "%g" % value
What I don't want is a function like numpy's around or round. Those functions take a value and need to be told which part of the value is important. The point is that in general I don't know how many digits are significant; it depends on the size of the error of that value.
Another example:
value = 123, error = 12, => 120
One can drop the 3, because the error is of the order of 10. However, this behaviour is not so important, because some people would still write 123 for the value. Here it is okay, but not perfectly right.
For big numbers the "g" string operator is a usable choice, but not always what I need. For example, if the error is bigger than the value (which happens, e.g., when someone tries to measure something that does not exist):
value = 10, error = 100
I still wish to keep the 10 as the value, because I don't know it any better. The function should then return 10 and not 0.
What I have written does work more or less, but it is clearly not effective or elegant in any way. I also assume this question concerns many people, since every scientist has to format numbers this way, so I'm sure there is a ready-to-use solution somewhere, but I haven't found it yet.
Probably my Google skills aren't good enough, but I wasn't able to find a solution in two days, so now I'm asking here.
For testing my code I used the following, but more is needed.
errors = [0.2, 1.123, 1.0, 123123.1233215, 0.123123123768]
values = [12.3453, 123123321.4321432, 0.000321, 321321.986123612361236, 0.00001233214]
for value, error in zip(values, errors):
    print "Test value:", value, "Error:", error
    print "Result:", signigicant(value, error)
import math

def round_on_error(value, error):
    significant_digits = 10**math.floor(math.log(error, 10))
    return value // significant_digits * significant_digits
Example:
>>> errors = [0.2,1.123,1.0, 123123.1233215,0.123123123768]
>>> values = [12.3453,123123321.4321432, 0.000321 ,321321.986123612361236,0.00001233214 ]
>>> map(round_on_error, values, errors)
[12.3, 123123321.0, 0.0, 300000.0, 0.0]
And if you want to keep a value that is inferior to its error:
def round_on_error(value, error):
    if value < error:
        return value
    significant_digits = 10**math.floor(math.log(error, 10))
    return value // significant_digits * significant_digits
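As a self-contained sanity check of this guarded variant (the expected values below follow from the definition: the value is kept when it is smaller than the error, otherwise it is truncated to the error's order of magnitude):

```python
import math

def round_on_error(value, error):
    # Keep the value untouched when the error exceeds it
    if value < error:
        return value
    significant_digits = 10 ** math.floor(math.log(error, 10))
    return value // significant_digits * significant_digits

print(round_on_error(10, 100))  # 10 (value kept, not rounded to 0)
print(round_on_error(123, 12))  # 120.0 (error is of order 10)
```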