How to get "first-order optimality" with a Python script

I am curious how to get the "first-order optimality" value using a Python script.
For optimization, Python has many modules, such as scipy.optimize and OpenOpt, but I am confused about how to use them to get the first-order optimality.
Here is a sample MATLAB script that gets the first-order optimality:
[x,resnorm,residual,exitflag,output,lambda] = lsqcurvefit(func,x0,xdata,tdata);
foo = output.firstorderopt % get the first-order optimality value
Here is some reference for foo from MathWorks:
here
Thanks for your attention, Happy New Year 2011 :)

Try scipy.linalg.basic.lstsq or scipy.optimize.minpack.curve_fit
Try writing your own code: refer to the tutorials, or search the web for scipy.linalg.basic.lstsq code samples (there are some out there). If you run into any issues, post the code and the issue here.
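As a sketch of how you might read this value off in newer SciPy (assuming scipy >= 0.17, where scipy.optimize.least_squares and its optimality field exist; the model and data below are made-up examples, not from the question):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical model: fit a * exp(b * x) to synthetic, noise-free data.
def residuals(p, xdata, tdata):
    a, b = p
    return a * np.exp(b * xdata) - tdata

xdata = np.linspace(0, 1, 20)
tdata = 2.5 * np.exp(1.3 * xdata)

res = least_squares(residuals, x0=[1.0, 1.0], args=(xdata, tdata))

# `optimality` is the first-order optimality measure: for unconstrained
# problems it is the infinity norm of the gradient J^T r at the solution,
# the same quantity MATLAB reports as output.firstorderopt.
print(res.x)           # fitted parameters, close to [2.5, 1.3]
print(res.optimality)  # first-order optimality value
```

At a true minimum of a noise-free fit this value should be very close to zero.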

Related

Python 3: Search the virtual memory of a running Windows process

begin TLDR;
I want to write a python3 script to scan through the memory of a running windows process and find strings.
end TLDR;
This is for a CTF binary. It's a typical Windows x86 PE file. The goal is simply to get a flag from the process's memory as it runs. This is easy with ProcessHacker: you can search through the strings in the memory of the running application and find the flag with a regex. Because I'm a masochistic geek, I strive to script out solutions for CTFs (for everything, really). Specifically, I want to use Python 3; C# is also an option, but I would really like to keep all of the solution scripts in Python.
I thought this would be a very simple task. You know... pip install some library written by someone who has already solved the problem, and use it. I couldn't find anything that would let me do what I need for this task. Here are the libraries I have already tried.
ctypes - This was the first one I used, specifically ReadProcessMemory. I kept getting error 299, which was because the buffer I was passing in was larger than that section of memory, so I wrote a recursive function that would catch that error and divide the buffer length by 2 until it read something, then read one byte at a time until it hit a 299 error again. I may have been on the right track there, but I wasn't able to get the flag. I WAS able to find the flag, but only if I knew its exact address (which I'd get from ProcessHacker). I may make a separate question on SO to address that; this one is really just me asking the community whether something already exists before diving into this.
pymem - A nice wrapper around ctypes, but I had the same issues as above.
winappdbg - Python 2.x only. I don't want to use Python 2.x.
haystack - This looks like it depends on winappdbg, which depends on Python 2.x.
angr - This is a possibility; I've only scratched the surface with it so far. It looks complicated, and it's on the to-learn list, but I don't want to dive into something right now that isn't going to solve the issue.
volatility - This looks like it is meant for working with full RAM dumps, not for hooking into currently running processes and reading their memory.
My plan at the moment is to dive a bit more into angr to see if that will work, then go back to pymem/ctypes and try more things. If all else fails, ProcessHacker IS open source; I'm not fluent in C, so it'll take time to figure out how they're doing it. I'm really hoping there's some Python 3 library I'm missing, or maybe I'm going about this the wrong way.
I ended up writing the script using the frida library. I also have to give a shout-out to rootbsd, because his or her code in the fridump3 project helped greatly.
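For the string-scanning half of the task, a minimal sketch (the pattern, addresses, and flag text below are made-up examples): once you have (base_address, bytes) chunks out of the target process, however you read them (a frida dump, or VirtualQueryEx plus ReadProcessMemory via ctypes as attempted above), finding the flag is just a regex over each chunk:

```python
import re

def scan_chunks(chunks, pattern):
    """Yield (absolute_address, match_bytes) for every regex hit in each chunk.

    `chunks` is an iterable of (base_address, data_bytes) pairs, e.g. the
    readable regions dumped from a live process.
    """
    rx = re.compile(pattern)
    for base, data in chunks:
        for m in rx.finditer(data):
            yield base + m.start(), m.group()

# Demo with a fake memory chunk instead of a live process:
chunks = [(0x00400000, b"...junk...flag{n0t_th3_r34l_fl4g}...junk...")]
hits = list(scan_chunks(chunks, rb"flag\{[^}]+\}"))
print(hits)  # [(4194314, b'flag{n0t_th3_r34l_fl4g}')]
```

Keeping the scan separate from the memory-reading code also makes it easy to reuse the same search against a file dump.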

Do two Python scripts running at the same time access the same randomness?

I am running 36 Jupyter notebooks on an AWS instance. The code generates experimental data based on a probability distribution. I realize that I cannot multi-thread, as the threads would use the same randomness. I'm not sure whether the same is true if I run the Python scripts separately.
I would like to know whether the Python processes running in the background access the same randomness.
Currently I'm using numpy.random. I realize that I could use os.urandom to avoid this problem, but I would really prefer not to, as I would have to do a lot of wrapping to get the distributions I require.
Thanks in advance
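On whether separate processes share randomness: each Python process has its own NumPy random state, and an unseeded one is seeded from OS entropy, so separate scripts generally do not repeat each other. For reproducible, non-overlapping streams across the 36 notebooks, NumPy's SeedSequence.spawn (available in NumPy >= 1.17) is one option; the root seed below is an arbitrary example:

```python
import numpy as np

# One root seed for the whole experiment; spawn() derives independent
# child seeds, one per notebook/process, with non-overlapping streams.
root = np.random.SeedSequence(12345)
children = root.spawn(36)
rngs = [np.random.default_rng(s) for s in children]

# e.g. notebook i would use rngs[i] (or be handed children[i] to build its own)
sample_0 = rngs[0].normal(size=5)
sample_1 = rngs[1].normal(size=5)
```

Each Generator exposes the usual distributions (normal, poisson, choice, ...), so no wrapping of raw os.urandom bytes is needed.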

Append error in Python

I am using JuliaBox to run python code in python 2.
My code is as follows:
l=[]
l.append(5)
And the following is the error I got:
type Array has no field append
But I have used append as given in the Python documentation: https://docs.python.org/2.6/tutorial/datastructures.html
Where did I go wrong?
You are using Julia, not Python:
I don't think you are doing anything obviously wrong. I can reproduce your problem by clicking New on the JuliaBox.org landing page and selecting Python 2 in the Notebooks subsection of the menu. This creates a new notebook which you would expect to be running against the Python kernel, and it gives you some visual indications that it is running Python.
However
In fact, it is not running Python; it is running Julia. You can test this by, for instance, simply typing sin(0.3). This would fail in Python, but it gives you a result in Julia. Similarly, println("Hello world!") works in Julia but not in Python.
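A quick way to confirm which kernel a notebook cell is really running is a statement that only one of the two languages accepts; for example, this is valid Python but an error under a Julia kernel:

```python
# Succeeds under a Python kernel; Julia has no `import sys` statement.
import sys
print(sys.version)
```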
I'm not familiar with IJulia or JuliaBox, so I can't state categorically whether this is a bug, but it certainly feels like one, and it is unexpected and counter-intuitive behaviour at best.
My final comment is to try a different interpreter - if you want something with a similar look and feel, you could always use IPython directly. As a bonus, you'll be able to use Python 3 instead of being stuck with 2.6!
EDIT
As highlighted by Matt B. in the comments, this is a known bug in IJulia.
Your Python code is perfectly valid. Try another interpreter.

Is there a way to Pre-Analyze a Python program for naming conflicts?

One of the most frustrating things about programming in Python thus far has been the lack of some kind of "pre-analysis". In Java, for example, a pre-analysis is performed before the actual compilation of a program, in which things like name usage are checked. In other words, if I have called a variable list_one in one area, and say I misspell it as list_on in another area, Java will say "Hey, you can't do that, I don't know what list_on is."
Python does not seem to do this, and it is terribly frustrating! I have a program that takes about 15 minutes to run, and the last thing I want to see 14.5 minutes in is something like
NameError: name 'list_on' is not defined
Are there any tools out there that can perform this kind of check before the interpreter actually runs the program? If not, what are some ways to work around this issue?
Have you considered checking your code with something like pyflakes or pylint?
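To see why a linter is needed at all, note that Python compiles the typo without complaint; the NameError only appears at run time. A very crude sketch of the kind of static analysis pyflakes performs (real linters handle scopes, imports, attribute access, etc. far more carefully):

```python
import ast
import builtins

source = """
list_one = [1, 2, 3]
print(list_on)  # typo for list_one
"""

# Python happily byte-compiles this; no NameError is raised here.
compile(source, "<demo>", "exec")

# Crude check: names that are read but never assigned and not built-ins.
tree = ast.parse(source)
assigned = {n.id for n in ast.walk(tree)
            if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
loaded = {n.id for n in ast.walk(tree)
          if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
undefined = loaded - assigned - set(dir(builtins))
print(undefined)  # {'list_on'}
```

pyflakes and pylint do exactly this sort of analysis up front, so the typo is flagged before you spend 15 minutes running the program.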
UPDATE
I found a fantastic solution to this problem for those who happen to be Emacs users: you can install PyFlakes-Flymake. This is a great tool! It performs static analysis of your code on the fly and highlights trouble areas in red. I suggest using pip instead of the suggested easy_install. Other than that, it is pretty simple to get up and running, and well worth the effort!

Embedded Python - Blocking operations in time module

I'm developing my own Python code interpreter using the Python C API, as described in the Python documentation. I've taken a look at the Python source code, and I have tried to follow the same steps that the standard interpreter carries out when executing a .py file. These steps (the sequence of C API function calls) are basically:
PyRun_AnyFileExFlags()
PyRun_SimpleFileExFlags()
PyRun_FileExFlags()
PyArena_New()
PyParser_ASTFromFile()
run_mod()
PyAST_Compile()
PyEval_EvalCode()
PyEval_EvalCodeEx()
PyThreadState_GET()
PyFrame_New()
PyEval_EvalFrameEx()
The only difference in my code is that I do the AST compilation, frame creation, etc. manually, and then I call PyEval_EvalFrame.
With this, I am able to execute an arbitrary .py file with my program, as if it were the normal Python interpreter. My problem comes when the code my program is executing makes use of the time module: all time module operations block on the GIL! For example, if the Python code calls time.sleep(1), the call blocks and never completes.
Obviously I am doing something wrong that blocks the GIL (and therefore the time module), but I don't know how to correct it. The last statement in my code where I have control is PyEval_EvalFrameEx, and from that point on everything runs "as in the regular Python interpreter", I think.
Anybody had a similar problem? What am I doing wrong, so that I block the time module?
Hope somebody can help me...
Thanks for your time. Best regards,
R.
You need to provide more detail.
How does your interpreter's behavior differ from the standard interpreter?
If you just want to run arbitrary source files, why are you not calling one of the higher level interfaces, like PyRun_SimpleFile? Did your code call Py_Initialize?
