CMOS XOR propagation delay in Python

I have been struggling with something relatively simple but haven't yet figured out a good way to solve it.
I need to simulate XOR gates at a very high level. I have two streams of 0/1 and want to XOR them piecewise, and that's the easy bit. Now I want to add a limitation of real-life CMOS XOR gates: the propagation delay.
This means that if the inputs change so quickly that the XOR output would have to transition faster than a certain delay, the output does not transition, therefore missing some of the transitions at the output.
Googling a bit, I think I found a MATLAB tool that does this (https://www.mathworks.com/help/sps/ref/cmosxor.html) and I would like something similar for my Python code.
Any help?
Thanks a lot!

For high level simulation, you could apply a time wheel. The time wheel has a fixed number of slots corresponding to multiples of a basic time unit (fractions of a nanosecond). Attached to each slot is a list of events scheduled for this time.
An event is a transition of an input or output line. The simulation algorithm works around the wheel and calculates subsequent events. These are stored in their respective slot lists. Time wraps around at the end of the wheel horizon. Events are removed from their lists after processing.
Example:
Input A of an XOR gate goes from 0 to 1 at time 8. This causes the output F of the gate to toggle its polarity 2 time slots later.
It is possible to use different delays depending on the direction of the transition (0->1 or 1->0). Typically, the delay also depends on the number of inputs driven by the gate output. The granularity or time accuracy of the simulation is determined by the number of slots in the wheel: the more slots, the smaller the timestep per slot. It is essential that the wheel horizon is big enough to prevent double wraps.
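A minimal Python sketch of such a time-wheel simulator for a single XOR gate, assuming a fixed two-slot delay; names such as WHEEL_SIZE, schedule and run are made up for the example, and a more faithful CMOS model would also cancel pulses shorter than the delay (which is what makes the gate "miss" fast transitions):

WHEEL_SIZE = 64   # number of slots; the horizon must exceed the largest delay
DELAY = 2         # XOR propagation delay, in time slots

wheel = [[] for _ in range(WHEEL_SIZE)]     # one event list per slot
state = {"A": 0, "B": 0, "F": 0}            # current values of the lines

def schedule(time, line, value):
    """Store a transition event in the slot it belongs to (time wraps around)."""
    wheel[time % WHEEL_SIZE].append((line, value))

def run(until):
    for t in range(until):
        # take the events for this slot and clear the slot
        events, wheel[t % WHEEL_SIZE] = wheel[t % WHEEL_SIZE], []
        for line, value in events:
            if state[line] == value:
                continue                     # no actual transition
            state[line] = value
            if line in ("A", "B"):           # an input toggled: output follows DELAY slots later
                schedule(t + DELAY, "F", state["A"] ^ state["B"])
            else:
                print(f"t={t}: F -> {value}")

schedule(8, "A", 1)   # input A goes 0 -> 1 at time 8; F should toggle at time 10
run(20)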
(A figure in the original answer illustrates the difference between discrete and analog simulation of the gate output.)
If you require the accuracy of analog simulation, you can either resort to a fully analog simulator, or you could assume exponential transitions and calculate the times at which certain thresholds are reached.

Related

Creating a model/optimization method to optimize a signal

Suppose I have a system that is driven by a signal comprising 3 voltage levels (let's say -V1, 0, V1). I need to determine the composition of the signal that most accurately produces the desired output. The output is a single number that represents the current state of the system. The number of possible permutations for such a signal are too high to brute-force and find the global minimum i.e. exploring the entirety of the search space is impossible. However, I do have a model that simulates the system so I can still process several possible options. How can I find the best signal to produce the desired output (in other words, the signal that drives the system to the desired state)?
One method that I have right now involves producing a starting set (i.e. a small subset of the search space) of signals that align with a set of constraints, finding the signal that produces output closest to the desired output, and making modifications to this signal (i.e. fine tuning) in order to obtain the desired output. This final step is difficult for me, as I am doing it manually. One idea for automating this final step is to parametrize all possible modifications (for instance, parameter x1 = 1 adds a single -V1 'frame' to the signal, x1 = 2 adds two such frames, x1 = -1 removes a -V1 frame, and so on), and step through the set of possible modifications. But again, there are a lot of possibilities. To improve upon this, I explored the effect that modifications have on the system output. The effects of these modifications look somewhat predictable (the distributions of the changes in output they produce generally follow Gaussian distributions). But I'm not sure how to proceed from here. What models/schemes would you suggest I use? Can I use information from the distributions of changes produced by modifications to intelligently fine-tune the signal? How do I account for outliers (i.e. cases wherein the modification(s) to an initial signal produce a change in output that lies in the tail end of the distribution)?
Edit: Forgot to mention, but the constraints on the signal would be length (the number of frames/steps in the signal must be less than or equal to a finite positive integer, N) and total potential (i.e. the sum of the voltages in the signal should equal an integer, V).
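A minimal sketch of the fine-tuning step described above as a greedy local search over single-frame modifications; simulate(), the target output, the frame encoding (a list of -V1/0/V1 values) and the constraint constants are all placeholders to adapt:

V1 = 1.0          # assumed voltage level
N = 50            # assumed maximum number of frames
V_TOTAL = 0.0     # assumed required total potential

def satisfies_constraints(signal):
    return len(signal) <= N and abs(sum(signal) - V_TOTAL) < 1e-9

def neighbours(signal):
    """Signals one single-frame modification away (add or remove one frame)."""
    for level in (-V1, 0.0, V1):
        for i in range(len(signal) + 1):
            yield signal[:i] + [level] + signal[i:]   # insert a frame
    for i in range(len(signal)):
        yield signal[:i] + signal[i + 1:]             # remove a frame

def fine_tune(signal, simulate, target, max_steps=200):
    """Greedy local search: keep applying the best single modification."""
    best, best_err = signal, abs(simulate(signal) - target)
    for _ in range(max_steps):
        feasible = [s for s in neighbours(best) if satisfies_constraints(s)]
        if not feasible:
            break
        cand = min(feasible, key=lambda s: abs(simulate(s) - target))
        err = abs(simulate(cand) - target)
        if err >= best_err:
            break                                     # no modification improves the output
        best, best_err = cand, err
    return best, best_err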

Optimizing a physical problem by coupling Python to simulation

I am trying to solve a physical problem by coupling a simulation software to Python. Basically, I need to find the values of length and diameter of each of the pipe sections in the picture below (line segment between any 2 black dots is a pipe section) such that fluid flow from point 0 reaches points 1-5 at the same time instant. I give some starting values for the length and diameter of each of the pipe sections and the simulation software solves to check if the fluid reaches the points 1-5 at the same time instant. If not, the lengths and diameters of the pipe section(s) need to be changed to ensure this. Flow not reaching points 1-5 at the same instant is known as flow imbalance, and ideally I need to reduce this imbalance to zero.
Now my question is - can I couple Python to the simulation software to suggest values of the length and diameter of the various pipe sections to ensure that flow reaches points 1-5 at the same time instant? I already know how to run the simulation software through a python script, and how to extract the flow imbalance result from the software. All I want to know is does a library/ function exist in Python that can iteratively suggest values for the length and diameter of pipe section(s) such that flow imbalance reduces after every iteration?
Please know that it is not possible to frame an objective function that will consider the length and diameter of the pipe section(s) and try to minimize or maximize it to eliminate flow imbalance. Running the software simulation is the only way to actually check this flow imbalance. I know that optimization libraries exist such as scipy.optimize, but AFAIK they work on an objective function. I could not find anything that would suggest values for the length and diameter of pipe sections depending on how large the flow imbalance is after every iteration.
So you can write a function:
def imbalance(pipe_diameters):
    times = get_pipe_times(pipe_diameters)
    return times - np.mean(times)
Then you can use:
from scipy.optimize import leastsq

x0 = uniform_diameter_pipes()
diameters, _ = leastsq(imbalance, x0)   # leastsq returns (solution, status flag)
If the number of parameters is more than the number of outputs then you may have to use minimize as mentioned in the comments. In that case your imbalance must return a scalar.
def imbalance(pipe_diameters):
    times = get_pipe_times(pipe_diameters)
    return np.var(times)   # variance of the arrival times; could be another scalar metric as well
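A hedged sketch of that minimize variant; Nelder-Mead is derivative-free, which suits a black-box simulator, and imbalance/x0 are the same placeholders as above:

from scipy.optimize import minimize

res = minimize(imbalance, x0, method="Nelder-Mead")   # derivative-free simplex search
best_diameters = res.x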

How can I statistically compare a lightcurve data set with the simulated lightcurve?

With python I want to compare a simulated light curve with the real light curve. It should be mentioned that the measured data contain gaps and outliers and the time steps are not constant. The model, however, contains constant time steps.
In a first step I would like to compare with a statistical method how similar the two light curves are. Which method is best suited for this?
In a second step I would like to fit the model to my measurement data. However, the model data is not calculated in Python but in an independent piece of software. Basically, the model data depends on four parameters, all of which are limited to a certain range, which I am currently feeding manually to the software (automation is planned).
What is the best method to create a suitable fit?
A "Brute-Force-Fit" is currently an option that comes to my mind.
This link "https://imgur.com/a/zZ5xoqB" provides three different plots. The simulated lightcurve, the actual measurement and lastly both together. The simulation is not good, but by playing with the parameters one can get an acceptable result. Which means the phase and period are the same, magnitude is in the same order and even the specular flashes should occur at the same period.
If I understand this correctly, you're asking a more foundational question that could be better answered in https://datascience.stackexchange.com/, rather than something specific to Python.
That said, as a data science layperson, this may be a problem suited for gradient descent with a mean-square-error cost function. You initialize the parameters of the curve (possibly randomly), then calculate the square error at your known points.
Then you make tiny changes to each parameter in turn, and calculate how the cost function is affected. Then you change all the parameters (by a tiny amount) in the direction that decreases the cost function. Repeat this until the parameters stop changing.
(Note that this might trap you in a local minimum and not work.)
More information: https://towardsdatascience.com/implement-gradient-descent-in-python-9b93ed7108d1
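A minimal sketch of that procedure, assuming a hypothetical run_model(params) that returns the simulated curve sampled at the measured times and an array measured of observations:

import numpy as np

def mse(params):
    return np.mean((run_model(params) - measured) ** 2)   # mean-square-error cost

def descend(params, lr=1e-3, eps=1e-4, steps=500):
    params = np.asarray(params, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            bumped = params.copy()
            bumped[i] += eps
            grad[i] = (mse(bumped) - mse(params)) / eps    # one-sided finite difference
        new = params - lr * grad
        if np.allclose(new, params):
            break                                          # parameters stopped changing
        params = new
    return params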
Edit: I overlooked this part
The simulation is not good, but by playing with the parameters one can get an acceptable result, meaning the phase and period are the same, the magnitude is of the same order, and even the specular flashes should occur at the same period.
Is the simulated curve just a sum of sine waves, and are the parameters just phase/period/amplitude of each? In this case what you're looking for is the Fourier transform of your signal, which is very easy to calculate with numpy: https://docs.scipy.org/doc/scipy/reference/tutorial/fftpack.html
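If so, a quick way to pull out the dominant period with numpy's FFT (uniform sampling assumed, so this applies to the model rather than the gapped measurements; model_flux and dt are placeholder names):

import numpy as np

freqs = np.fft.rfftfreq(len(model_flux), d=dt)                        # cycles per time unit
power = np.abs(np.fft.rfft(model_flux - np.mean(model_flux))) ** 2    # power spectrum
dominant_period = 1.0 / freqs[1:][np.argmax(power[1:])]               # skip the DC bin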

Genetic Algorithm in Optimization of Events

I'm a data analysis student and I'm starting to explore genetic algorithms. I'm trying to solve a problem with a GA but I'm not sure about the formulation of the problem.
Basically I have a variable whose state is 0 or 1 (0 means it is in the normal range of values, 1 means it is in a critical state). When the state is 1 I can apply one of 3 solutions (call them A, B and C), and for each solution I know the time at which the solution was applied and the time at which the state of the variable goes back to 0.
So for the problem I have a data set containing the critical events, the solution applied, the time interval (in minutes) from the critical event to the application of the solution, and the time interval (in minutes) from the application of the solution until the event goes back to 0.
With a genetic algorithm I want to find out which solution is best (and fastest) for a critical event, and if possible to rank the solutions, so that if in the future one solution can't be applied I can always apply the second best, for example.
I'm thinking of developing the solution in Python since I'm new to GA.
Edit: Specifying the problem (responding to AMack)
Yes, it's more or less that, but with some nuances. For example, solution A may be the most suitable for returning the variable to 0, but because other problems exist with the variable, more than one solution gets applied. So in the data I receive for an event of V, sometimes 3 or 4 solutions are applied, but only 1 or 2 of them are specialized for the problem I want to analyze. My objective is to build decision support for which solution to use when a given problem appears. But the optimal solution can be more than one, because for some events solution A acts very fast, while in other cases of the same event solution A doesn't produce a fast response and solution C is better in that case. So in the end I want a result that indicates the best solutions for the problem, not only the fastest, because the solution that is fastest in the majority of cases is sometimes not the fastest for the same issue with a different background.
I'm unsure of what your question is, but here are the elements you need for any GA:
A population of initial "genomes"
A ranking function
Some form of mutation and crossover within the genome
Reproduction
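A bare-bones skeleton of those pieces, assuming a binary (0/1) genome; the fitness function, the genome encoding and the rates are placeholders to adapt to the event data:

import random

def evolve(population, fitness, generations=100, mutation_rate=0.01):
    # population: list of genomes (lists of 0/1); assumes a reasonably sized, even population
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)    # ranking function
        parents = ranked[: len(ranked) // 2]                      # selection
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))                     # single-point crossover
            child = a[:cut] + b[cut:]
            child = [g if random.random() > mutation_rate else 1 - g
                     for g in child]                              # bit-flip mutation
            children.append(child)
        population = children                                     # reproduction
    return max(population, key=fitness)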
If a critical event is always the same, your GA should work very well. That being said, if you have a different critical event but the same genome you will run into trouble. GAs evolve solutions towards the best possible result for a set of conditions. If you constantly re-run the GA so that it may adapt to each unique situation you will get a greater degree of adaptability, but have a speed issue.
You have a distinct advantage using Python because string manipulation (what you'll probably use for the genome) is easy; however...
Python is slow.
If the genome is short, the initial population is small, and there are very few generations this shouldn't be a problem. You lose possibly better solutions that way but it will be significantly faster.
have fun...
You should take a look at the GARAGe group at Michigan State. They are a GA research group with a fair number of resources in terms of theory, papers, and software that should provide inspiration.
To start, let's make sure I understand your problem.
You have a set of sample data, each element containing a time series of a binary variable (we'll call it V). When V is set to True, a function (A, B, or C) is applied which returns V to its False state. You would like to apply a genetic algorithm to determine which function (or solution) will return V to False in the least amount of time.
If this is the case, I would stay away from GAs. GAs are typically used for some kind of function optimization / tuning. In general, the underlying assumption is that what you permute is under your control during the algorithm's application (i.e., you are modifying parameters used by the algorithm that are independent of the input data). In your case, my impression is that you just want to find out which of your (I assume) static functions perform best in a wide variety of cases. If you don't feel your current dataset provides a decent approximation of your true input distribution, you can always sample from it and permute the values to see what happens; however, this would not be a GA.
Having said all of this, I could be wrong. If anyone has used GAs in verification like this, please let me know. I'd certainly be interested in learning about it.

Parallel many dimensional optimization

I am building a script that generates input data [parameters] for another program to calculate. I would like to optimize the resulting data. Previously I have been using the SciPy Powell optimization (scipy.optimize.fmin_powell). The pseudocode looks something like this.
def value(param):
    run_program(param)
    score = parse_output()   # placeholder for parsing the program's output into a scalar
    return score

scipy.optimize.fmin_powell(value, param)
This works great; however, it is incredibly slow as each iteration of the program can take days to run. What I would like to do is coarse grain parallelize this. So instead of running a single iteration at a time it would run (number of parameters)*2 at a time. For example:
Initial guess: param = [1, 2, 3, 4, 5]

# Modify guess by plus/minus another matrix that is changeable at each iteration
jump = [1, 1, 1, 1, 1]

# Modify each variable plus/minus jump.
for num, a in enumerate(param):
    new_param1 = param[:]
    new_param1[num] = new_param1[num] + jump[num]
    run_program(new_param1)
    new_param2 = param[:]
    new_param2[num] = new_param2[num] - jump[num]
    run_program(new_param2)

# Wait until all programs are complete -> parse output
Output = [[value, param], ...]
# Create new guess
# Repeat
The number of variables can range from 3 to 12, so something like this could potentially speed up the code from taking a year down to a week. All variables are dependent on each other and I am only looking for local minima from the initial guess. I have started an implementation using Hessian matrices; however, that is quite involved. Is there anything out there that already does this, is there a simpler way, or any suggestions for getting started?
So the primary question is the following:
Is there an algorithm that takes a starting guess, generates multiple guesses, then uses those multiple guesses to create a new guess, and repeats until a threshold is reached? No analytic derivatives are available. What is a good way of going about this, is there something already built that does this, or are there other options?
Thank you for your time.
As a small update I do have this working by calculating simple parabolas through the three points of each dimension and then using the minima as the next guess. This seems to work decently, but is not optimal. I am still looking for additional options.
Current best implementation is parallelizing the inner loop of powell's method.
Thank you everyone for your comments. Unfortunately it looks like there is simply no concise answer to this particular problem. If I get around to implementing something that does this I will paste it here; however, as the project is not particularly important, nor the need for results pressing, I will likely be content letting it take up a node for a while.
I had the same problem while I was at university; we had a Fortran algorithm to calculate the efficiency of an engine based on a group of variables. At the time we used modeFRONTIER and, if I recall correctly, none of the algorithms were able to generate multiple guesses.
The normal approach would be to have a DOE, and there were some algorithms to generate the DOE to best fit your problem. After that we would run the single DOE entries in parallel, and an algorithm would "watch" the development of the optimizations showing the current best design.
Side note: if you don't have a cluster and need more computing power, HTCondor may help you.
Are derivatives of your goal function available? If yes, you can use gradient descent (old, slow but reliable) or conjugate gradient. If not, you can approximate the derivatives using finite differences and still use these methods. I think in general, if using finite difference approximations to the derivatives, you are much better off using conjugate gradients rather than Newton's method.
A more modern method is SPSA, which is a stochastic method and doesn't require derivatives. For somewhat well-behaved problems, SPSA requires far fewer evaluations of the goal function than the finite-difference approximation to conjugate gradients for the same rate of convergence.
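For reference, a rough SPSA sketch: only two evaluations of the expensive function per iteration, regardless of dimension. The gain schedules here are simplified, so treat the constants as placeholders to tune:

import numpy as np

def spsa(f, theta, a=0.1, c=0.1, iterations=100):
    theta = np.asarray(theta, dtype=float)
    for k in range(1, iterations + 1):
        ak = a / k                      # step-size gain (simplified schedule)
        ck = c / k ** 0.101             # perturbation gain (simplified schedule)
        delta = np.random.choice([-1.0, 1.0], size=theta.shape)   # simultaneous perturbation
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2.0 * ck * delta)
        theta = theta - ak * g_hat
    return theta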
There are two ways of estimating gradients, one easily parallelizable, one not:
around a single point, e.g. (f(x + h*e_i) - f(x)) / h; this is easily parallelizable up to Ndim
"walking" gradient: walk from x0 in direction e0 to x1, then from x1 in direction e1 to x2, ...; this is sequential.
Minimizers that use gradients are highly developed, powerful, and converge quadratically (on smooth enough functions). The user-supplied gradient function can of course be a parallel-gradient-estimator.
A few minimizers use "walking" gradients, among them Powell's method, see Numerical Recipes p. 509. So I'm confused: how do you parallelize its inner loop?
I'd suggest scipy fmin_tnc with a parallel-gradient-estimator, maybe using central, not one-sided, differences.
(Fwiw, this compares some of the scipy no-derivative optimizers on two 10-d functions; ymmv.)
I think what you want to do is use the threading capabilities built into Python.
Provided your working function has more or less the same run time whatever the params, it would be efficient.
Create 8 threads in a pool, run 8 instances of your function, get 8 results, run your optimisation algo to change the params based on the 8 results, repeat... profit?
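A possible sketch with a process pool rather than raw threads (the external program calls are independent, so they map cleanly onto workers); run_program is the placeholder from the question:

from multiprocessing import Pool

def evaluate_batch(param_sets, workers=8):
    """Run one external simulation per parameter set, up to `workers` at a time."""
    with Pool(processes=workers) as pool:
        return pool.map(run_program, param_sets)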
If I haven't misunderstood what you are asking, you are trying to minimize your function one parameter at a time.
You can do this by creating a set of single-argument functions, where for each function you freeze all the arguments except one.
Then you loop, optimizing each variable and updating the partial solution.
This method can speed up minimization of functions of many parameters by a great deal where the energy landscape is not too complex (i.e. the dependency between the parameters is not too strong).
Given a function
energy(*args) -> value
you create the guess and the functions:
guess = [1, 1, 1, 1]
funcs = [lambda x, i=i: energy(*(guess[:i] + [x] + guess[i+1:])) for i in range(len(guess))]
then you put them in a while loop for the optimization:
while convergence_condition:
    for func in funcs:
        optimize for func
        update the guess
    check for convergence
This is a very simple yet effective method of simplifying your minimization task. I can't really recall what this method is called (it is essentially coordinate descent), but a close look at the Wikipedia entry on minimization should do the trick.
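A hedged sketch of that loop using scipy's one-dimensional minimizer; energy and the initial guess are placeholders, and a fixed number of sweeps stands in for a real convergence test:

from scipy.optimize import minimize_scalar

guess = [1.0, 1.0, 1.0, 1.0]
for _ in range(10):                                   # sweeps over all coordinates
    for i in range(len(guess)):
        res = minimize_scalar(lambda x: energy(*(guess[:i] + [x] + guess[i + 1:])))
        guess[i] = res.x                              # update the partial solution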
You could parallelize at two levels: 1) parallelize the calculation within a single iteration, or 2) start N initial guesses in parallel.
For 2) you need a job controller to manage the N initial-guess discovery threads.
Add an extra output to your program: a "lower bound" indicating that descents from the current input parameters won't produce a value below it.
The N initial-guess threads can then compete with each other; if one thread's lower bound is higher than another thread's current value, that thread can be dropped by your job controller.
Parallelizing local optimizers is intrinsically limited: they start from a single initial point and try to work downhill, so later points depend on the values of previous evaluations. Nevertheless there are some avenues where a modest amount of parallelization can be added.
As another answer points out, if you need to evaluate your derivative using a finite-difference method, preferably with an adaptive step size, this may require many function evaluations, but the derivative with respect to each variable may be independent; you could maybe get a speedup by a factor of twice the number of dimensions of your problem. If you've got more processors than you know what to do with, you can use higher-order-accurate gradient formulae that require more (parallel) evaluations.
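A sketch of such a parallel gradient estimator using central differences, where f stands for the expensive black-box function; each of the 2*n evaluations is independent, so they can be farmed out to a process pool:

import numpy as np
from multiprocessing import Pool

def parallel_gradient(f, x, h=1e-4, workers=None):
    """Central-difference gradient; all 2*len(x) evaluations run in parallel."""
    x = np.asarray(x, dtype=float)
    points = []
    for i in range(len(x)):
        for sign in (+1.0, -1.0):
            p = x.copy()
            p[i] += sign * h
            points.append(p)
    with Pool(processes=workers) as pool:
        values = pool.map(f, points)
    return np.array([(values[2 * i] - values[2 * i + 1]) / (2.0 * h)
                     for i in range(len(x))])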
Some algorithms, at certain stages, use finite differences to estimate the Hessian matrix; this requires about half the square of the number of dimensions of your problem in function evaluations, and all of them can be done in parallel.
Some algorithms may also be able to use more parallelism at a modest algorithmic cost. For example, quasi-Newton methods try to build an approximation of the Hessian matrix, often updating this by evaluating a gradient. They then take a step towards the minimum and evaluate a new gradient to update the Hessian. If you've got enough processors so that evaluating a Hessian is as fast as evaluating the function once, you could probably improve these by evaluating the Hessian at every step.
As far as implementations go, I'm afraid you're somewhat out of luck. There are a number of clever and/or well-tested implementations out there, but they're all, as far as I know, single-threaded. Your best bet is to use an algorithm that requires a gradient and compute your own in parallel. It's not that hard to write an adaptive one that runs in parallel and chooses sensible step sizes for its numerical derivatives.
