Understanding the encoding of EXE/DLL files - Python

I recently tried to understand how the socket module works in Python. I opened the source code and, by tracing the socket class, found that it uses something like _socket.socket. Scrolling up to import _socket and tracing that down, I found that the module is located in another folder named DLLs. (I have no idea how, but I do know that you can import files from the Python install location no matter where your own file is located. How does that work? It would be cool if you could answer this doubt too.) The file had no default extension association, so I opened it with Notepad, which showed me it has an awkward encoding. Here are the first few lines of _socket.pyd:
MZ ... This program cannot be run in DOS mode.
(unreadable binary data follows, including the "PE" signature and section names such as .text, .rdata, .data, .pdata, .rsrc, and .reloc)
Does anyone have any idea how I can decode this into simple Python code? (I only know that .pyd files are DLL files in a Python-specific format.) I also found out from hours of googling that DLL and EXE files share the same format, so it would be cool if anyone could give me a link to a decoding tool, or at least a table of this encoding's characters so I can decode it on my own.

DLL and EXE files are both "binary" formats. On Windows, this is the PE (Portable Executable) format. It is compiled machine code and cannot be turned back into (nor did it start as) Python code. Python supports extensions that are written in C but called from Python. The socket library in Python is written entirely in C, and Python knows how to call into it.
To look at the socket code, you'll need to find the corresponding C source file in the CPython repository. Alternatively, you can use a disassembler like IDA Pro or Ghidra to get an assembly representation, though if you don't yet understand binary formats, this may not be of much use. Ghidra (and Hex-Rays for IDA Pro) will also attempt to decompile the assembly, giving you an approximation of the original source, but without variable names, with inferred types, and so on.
But if you are looking for the python code that sits behind _socket, none exists.
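As for the side question of how import _socket works from anywhere: Python resolves imports by searching the directories listed in sys.path, which on Windows includes the DLLs folder of the installation. A minimal sketch to see this for yourself (the exact paths will vary by installation):

import sys
import _socket

# The extension module reports where it was loaded from,
# e.g. something like <install dir>\DLLs\_socket.pyd on Windows.
print(_socket.__file__)

# These are the directories the import system searches, in order.
for entry in sys.path:
    print(entry)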
Program Differences
Compiled Languages
These languages take source code (C, C++, etc.) and turn it into machine code, which is most directly represented by an assembly language. The program runs natively on the host machine, meaning it doesn't need any sort of interpreter; it's in a format the OS understands. The original source code is lost in the sense that there is no direct mapping back to it. Advanced decompilers can make inferences, but they are often imperfect and give only general guesses as to what the original source code looked like. There is no encoding that lets you parse the source code back out of the binary format.
Interpretive Languages
These languages run an interpreter (itself a native program in a format the OS understands, i.e. PE) which reads source code and dynamically turns it into machine code the processor understands. This is how Python works, and it's why the source code is inside the program you run. But you can only run Python code through a Python interpreter.
Managed Languages
These are a bit of a hybrid. They have a compilation step that takes source code and converts it into byte code. This byte code is then run through an interpreter that converts it down into machine code. So you still need an interpreter (or VM, the more common term) that can run the byte code, but the source code itself does not have to be present. Many of these can also be decompiled, and may give better output than compiled languages, but the result is still inferred from the underlying byte code, not the actual source code that was used to build the binary.
Python can also behave like a managed language in that its interpreter compiles the source into a byte-code representation, then acts like a VM in that it executes that byte code. This is what .pyc files are: the byte-code representations of their corresponding .py files.
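You can watch this compilation step yourself with the standard dis module, which prints the byte code Python compiles a function to (a minimal sketch; the exact instruction names vary between Python versions):

import dis

def add(a, b):
    return a + b

# Disassemble the byte code the interpreter compiled for add();
# prints instructions such as LOAD_FAST and a binary-add opcode.
dis.dis(add)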

Python itself is a program, and it runs on your computer. Python code is read, interpreted, and ultimately executed by running code in your computer's native instruction set. Python is also set up so that it can import and run code that is already in the form of "native" instructions (in this case, written in C and compiled to machine code).
To get a feel for how this works, take a look at the official python.org documentation: Extending Python with C or C++. Enjoy!
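As a quick taste of Python driving native code without writing an extension module, the standard ctypes library can load a DLL and call into it directly. A minimal sketch, assuming Windows, since it calls into kernel32.dll:

import ctypes

# Load a system DLL; WinDLL uses the stdcall convention of the Win32 API.
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetTickCount.restype = ctypes.c_uint32

# Call a native function: milliseconds since the system was started.
print(kernel32.GetTickCount())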

Related

Transferring data from a C buffer to Python for plotting with Matplotlib in Visual Studio 2019

I have C code successfully running in Visual Studio 2019 that fills a buffer with real time data from an FPGA. It is declared as...
unsigned char *pBuffer;
...in C. I can see that the data is correct in a memory watch window, at the address of pBuffer.
I have also installed Python 3.7 in Visual Studio and am successfully plotting from a Python array with Matplotlib, using Python.h.
So my question is: how do I transfer the data from the C buffer to a Python array for plotting?
I have looked at ctypes, and since my main code is in C, it does not make much sense to go from C to Python only to call back into C. I have looked at bytes(), bytearray(), and memoryview(). memoryview appeals to me because it does not copy data, and this plotting needs to be very fast, as the data comes in very fast (think oscilloscope). But it does not seem to work with real physical addresses; rather, it uses some kind of identifier that does not correspond to any memory location where my data is. I simply want to plot the data that I know exists in a C buffer (a 1D array) at a specific address. Python seems to be very restrictive.
I can't seem to get anything to do what I want, since Python apparently disallows reading data from a specific memory location. That being the case, I wonder how it could examine memory to display its content in any way, let alone transfer the content to a Python array. How is this done? And yes, I am a Pythonic newbie. Thanks for any help or suggestions.
My first idea would be to pass the data in a file. It will add a few milliseconds to the interchange, but that might not even be perceptible after loading the Python interpreter and the matplotlib module.
If you needed more immediate response, maybe a local loopback network connection would be useful. That's more trouble on the C side than on the Python side, especially on Windows.
As for reading directly from memory, you're right. Standard Python won't do that. You could use C-based modules to do whatever you're doing now from a C main program to get data out of your FPGA. If it's all stock Windows API stuff, you might want to take a look at Mark Hammond's PyWin32 module. That can be installed from the Python Package Index using pip install pywin32 from an elevated command prompt. It might support the APIs you need. If it does, then you might not need a separate C program to get the data out.
PS: Another option for interprocess communication is a named pipe. Windows named pipes can be opened with PyWin32. See the top-voted answer to this SO question.
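Going back to the ctypes idea the questioner mentioned: within a single process, ctypes can wrap an existing buffer at a known address without copying. A minimal sketch; buffer_address and n_samples are hypothetical stand-ins for values your C code would hand over, and this only works for memory inside your own process:

import ctypes

n_samples = 1024                 # hypothetical length of the C buffer
buffer_address = 0x20000000      # hypothetical address of pBuffer's data

# Overlay a ctypes array type on the existing memory; no copy is made.
Buffer = ctypes.c_ubyte * n_samples
samples = Buffer.from_address(buffer_address)

snapshot = bytes(samples)        # copying snapshot, if a copy is acceptable
view = memoryview(samples)       # zero-copy view for APIs that accept buffers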

Micropython 1.9.3 - How to compile .py @micropython.native code into .mpy?

I'm on MicroPython 1.9.3. I know how to use mpy-cross to turn a .py file into a compiled .mpy that can be executed by the MicroPython virtual machine.
The problem is that if I try to compile using @micropython.native, i.e. compile the Python script to native code instead of byte code, I get an error:
../../mpy-cross/mpy-cross -o build/frozen_mpy/./frozentest.mpy -s frozentest.py frozentest.py
ValueError: can only save bytecode
On the following .py
@micropython.native
def native_add(a, b):
    return a + b

c = native_add(2342, 4542)
QUESTION
Is it not possible to embed native code in .mpy format? Did I miss some option in mpy-cross/mpconfigport.h?
Only thing I changed is:
#define MICROPY_EMIT_THUMB (0) // changed it to 1
I got the answer from someone on the MicroPython forum:
You cannot. It is a TODO item. If you want to put it into flash memory, you can embed it as frozen source code in some ports. Just put these files in a subdirectory called scripts, like esp8266/scripts or stm32/scripts. But it will still be compiled at import time and consume RAM. Typically that should not hurt, since this style of coding is used only for small, time-critical sections of the code.

strange Python version-dependent behavior with writing sorted output using heapq.merge

Some of the gsutil users have reported a failure when running gsutil rsync, which I've tracked down to being apparently a Python 2.7.8-specific problem: we write sorted lists of the source and destination directories being synchronized in binary mode ('w+b'), and then read those lists back in, also in binary mode ('rb'). This works fine under Python 2.6.x and Python 2.7.3, but under Python 2.7.8 the output ends up in a garbled-looking binary format, which then doesn't parse correctly when being read back in.
If I switch the output to use 'w+' mode instead the problem goes away. But (a) I think I do want to write in binary mode, since these files can contain Unicode, and (b) I'd like to understand why this is a Python version-dependent problem.
Does anyone have any ideas about why this might be happening?
FYI, I tried to reproduce this problem with a short program that just writes a file in binary mode and reads it back in binary mode, but the problem doesn't reproduce with that program. I'm wondering if there might be something about the heapq.merge implementation that changed in Python 2.7.8 that would explain this (we sort in batches, and the individual sorted files are fine; it's the output from heapq.merge that gets garbled in binary mode under Python 2.7.8).
Any suggestions/thoughts would be appreciated.
It sounds to me as if the file object hasn't properly been flushed, or no seek has been done between a read and a write action (or vice versa). A binary file object would be more susceptible to this, as the OS won't be doing newline translation either. At the C level, undefined behaviour can be triggered, and uninitialised memory is then read or written. There is a Python issue about this at http://bugs.python.org/issue1394612.
Why this changed in a Python minor version is interesting, however, and if you have a reproducible case you should definitely report it to the Python project issue tracker.
If you are just writing Unicode, then encode that Unicode to a UTF encoding; you do not need a binary file mode for that, because UTF-8 never reuses newline byte values inside the encoding of other code points.
Alternatively, use the io.open() function to open a Unicode-aware file object for your data.
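A rough sketch of what the io.open() route might look like for the merge step described above (batch_a, batch_b, and the file name are hypothetical placeholders for the per-batch sorted listings):

import io
import heapq

batch_a = [u'alpha\n', u'gamma\n']   # hypothetical pre-sorted batches
batch_b = [u'beta\n', u'delta\n']

# io.open returns a Unicode-aware file object on both Python 2 and 3;
# encoding to UTF-8 removes the need for a binary file mode.
with io.open('merged_listing.txt', 'w', encoding='utf-8') as out:
    for line in heapq.merge(batch_a, batch_b):
        out.write(line)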

Can python source code executed using PyRun_SimpleString be extracted?

I am developing software based on embedded Python and C++. I want to secure some of my Python code and prevent people from copying it.
For now I am using PyRun_SimpleString to execute the Python code, and the string is generated by my C++ code.
If I use this method, will it secure the Python code from being copied?
So, as I understand it, the actual program will not exist in the executable in Python source form, in "marshalled" form (basically a .pyc image), or in compiled form, though I guess it will exist in some encrypted or obfuscated form which is converted into Python source at run time.
This definitely makes it harder to extract the code, but an attacker who can trace the code as it runs will be able to catch calls to PyRun_SimpleString and obtain the plain source.
It's a question of degree - how hard you want to work to make the job harder for the attacker.
You might want to look into the "frozen module" facility in the Python source. This basically allows ".pyc" images to be compiled in as byte arrays and imported at runtime by un-marshalling them. So there's never any plain-text source, but there is the .pyc image, which is reasonably easy to analyze if you find it in the binary. Take it one step further and obfuscate the pyc image: now the attacker needs to analyze or trace past the de-obfuscation, and still won't see plain source code.
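The standard marshal module shows the shape of that idea at the Python level. A minimal sketch of round-tripping byte code; the single-byte XOR stands in for whatever real obfuscation you would choose, and '<embedded>' is just a label used in tracebacks:

import marshal

source = "print('hello from embedded code')"
code = compile(source, '<embedded>', 'exec')

# Serialize the code object (the same representation .pyc files use),
# then obfuscate it so the plain byte code is not sitting in the binary.
blob = bytes(b ^ 0x5A for b in marshal.dumps(code))

# At run time: de-obfuscate, un-marshal, and execute. Note that an
# attacker who traces execution can still recover the byte code here.
restored = marshal.loads(bytes(b ^ 0x5A for b in blob))
exec(restored)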

Why don't scripting languages output Unicode to the Windows console?

The Windows console has been Unicode aware for at least a decade and perhaps as far back as Windows NT. However for some reason the major cross-platform scripting languages including Perl and Python only ever output various 8-bit encodings, requiring much trouble to work around. Perl gives a "wide character in print" warning, Python gives a charmap error and quits. Why on earth after all these years do they not just simply call the Win32 -W APIs that output UTF-16 Unicode instead of forcing everything through the ANSI/codepage bottleneck?
Is it just that cross-platform performance is low priority? Is it that the languages use UTF-8 internally and find it too much bother to output UTF-16? Or are the -W APIs inherently broken to such a degree that they can't be used as-is?
UPDATE
It seems that the blame may need to be shared by all parties. I imagined that the scripting languages could just call wprintf on Windows and let the OS/runtime worry about things such as redirection. But it turns out that even wprintf on Windows converts wide characters to ANSI and back before printing to the console!
Please let me know if this has been fixed since the bug report link seems broken but my Visual C test code still fails for wprintf and succeeds for WriteConsoleW.
UPDATE 2
Actually you can print UTF-16 to the console from C using wprintf but only if you first do _setmode(_fileno(stdout), _O_U16TEXT).
From C you can print UTF-8 to a console whose code page is set to 65001; however, Perl, Python, PHP, and Ruby all have bugs which prevent this. Perl and PHP corrupt the output by adding extra blank lines after lines that contain at least one wide character. Ruby has slightly different corrupt output. Python crashes.
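For what it's worth, WriteConsoleW is also reachable from Python through ctypes, bypassing the byte-oriented path entirely. A minimal sketch, assuming output goes to a real console rather than a redirected stream:

import ctypes

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
kernel32.GetStdHandle.restype = ctypes.c_void_p
handle = ctypes.c_void_p(kernel32.GetStdHandle(-11))  # STD_OUTPUT_HANDLE

text = u'Příliš žluťoučký kůň úpěl ďábelské ódy\n'
written = ctypes.c_ulong()

# WriteConsoleW takes UTF-16 text directly, so no code page is involved.
kernel32.WriteConsoleW(handle, text, len(text), ctypes.byref(written), None)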
UPDATE 3
Node.js is the first scripting language that shipped without this problem straight out of the box.
The Python dev team slowly came to realize this was a real problem after it was first reported back at the end of 2007, and there was a huge flurry of activity to fully understand and fix the bug in 2016.
The main problem seems to be that it is not possible to use Unicode on Windows using only the standard C library and no platform-dependent or third-party extensions. The languages you mentioned originate from Unix platforms, whose method of implementing Unicode blends well with C (they use normal char* strings, the C locale functions, and UTF-8). If you want to do Unicode in C, you more or less have to write everything twice: once using nonstandard Microsoft extensions, and once using the standard C API functions for all other operating systems. While this can be done, it usually doesn't have high priority because it's cumbersome and most scripting language developers either hate or ignore Windows anyway.
At a more technical level, I think the basic assumption that most standard-library designers make is that all I/O streams are inherently byte-based at the OS level, which is true for files on all operating systems and for all streams on Unix-like systems, with the Windows console being the only exception. Thus the architecture of many class libraries and programming-language standards would have to be modified to a great extent to incorporate Windows console I/O.
Another, more subjective, point is that Microsoft just did not do enough to promote the use of Unicode. The first Windows OS with decent (for its time) Unicode support was Windows NT 3.1, released in 1993, long before Linux and OS X grew Unicode support. Still, the transition to Unicode in those OSes has been much more seamless and unproblematic. Microsoft once again listened to the sales people instead of the engineers and kept the technically obsolete Windows 9x line around until 2001; instead of forcing developers to use a clean Unicode interface, they still ship the broken and now-unnecessary 8-bit API and invite programmers to use it (look at a few of the recent Windows API questions on Stack Overflow; most newbies still use the horrible legacy API!).
When Unicode came out, many people realized it was useful. Unicode started as a pure 16-bit encoding, so it was natural to use 16-bit code units. Microsoft then apparently said "OK, we have this 16-bit encoding, so we have to create a 16-bit API", not realizing that nobody would use it. The Unix luminaries, however, thought "how can we integrate this into the current system in an efficient and backward-compatible way so that people will actually use it?" and subsequently invented UTF-8, which is a brilliant piece of engineering. Just as when Unix was created, the Unix people thought a bit more, needed a bit longer, and had less financial success, but eventually did it right.
I cannot comment on Perl (though I suspect there are more Windows haters in the Perl community than in the Python community), but regarding Python I know that the BDFL (who doesn't like Windows either) has stated that adequate Unicode support on all platforms is a major goal.
A small contribution to the discussion: I am running Czech-localized Windows XP, which almost everywhere uses the CP1250 code page. The funny thing about the console, though, is that it still uses the legacy DOS code page 852.
I was able to make a very simple Perl script that prints UTF-8 encoded data to the console using:
binmode STDOUT, ":utf8:encoding(cp852)";
I tried various options (including utf16le), but only the above setting printed the accented Czech characters correctly.
Edit: I played with the problem a little more and found Win32::Unicode. The module exports a function printW that works properly both on direct output and when redirected:
use utf8;
use Win32::Unicode;
binmode STDOUT, ":utf8";
printW "Příliš žluťoučký kůň úpěl ďábelské ódy";
I have to unask many of your questions.
Did you know that
Windows uses UTF-16 for its APIs, but still defaults to the various "fun" legacy encodings (e.g. Windows-1252, Windows-1251) in userspace, including file names, differently for the many localisations of Windows?
you need to encode output, and that picking the appropriate encoding for the system is achieved by the locale pragma, that there is a POSIX standard called locale on which this is built, and that Windows is incompatible with it?
Perl already supported the so-called "wide" APIs once?
Microsoft managed to adapt UTF-8 into their codepage system of character encoding, and you can switch your terminal by issuing the appropriate chcp 65001 command?
Michael Kaplan has a series of blog posts about the cmd console and Unicode that may be informative (while not really answering your question):
Conventional wisdom is retarded, aka What the ##%&* is _O_U16TEXT?
Anyone who says the console can't do Unicode isn't as smart as they think they are
A confluence of circumstances leaves a stone unturned...
PS: Thanks @Jeff for finding the archive.org links.
Are you sure your script would output Unicode correctly on some other platform? The "wide character in print" warning makes me very suspicious.
I recommend looking over this overview.
"Why on earth after all these years do they not just simply call the Win32 -W APIs that output UTF-16 Unicode instead of forcing everything through the ANSI/codepage bottleneck?"
Because Perl and Python aren't Windows programs. They're Unix programs that happen to have been mostly ported to Windows. As such, they don't like to call Win32 functions unless necessary. For byte-based I/O it's not necessary; that can be done with the standard C library. UTF-16-based I/O is a special case.
"Or are the -W APIs inherently broken to such a degree that they can't be used as-is?"
I wouldn't say that the -W APIs are inherently broken as much as I'd say that Microsoft's approach to Unicode in C(++) is inherently broken.
No matter how much certain Windows developers insist that programs should use wchar_t instead of char, there are just too many barriers to switching:
Platform dependence:
The use of UTF-16 wchar_t on Windows and UTF-32 wchar_t elsewhere. (The new char16_t and char32_t types may help.)
The non-standardness of UTF-16 filename functions like _wfopen, _wstat, etc. limits the ability to use wchar_t in cross-platform code.
Education. Everybody learns C with printf("Hello, world!\n");, not wprintf(L"Hello, world!\n");. The C textbook I used in college never even mentioned wide characters until Appendix A.13.
The existing zillions of lines of code that use char* strings.
For Perl to fully support Windows in this way, every call to print, printf, say, warn, and die has to be modified to determine:
Is this Windows?
Which version of Windows? (Perl still mostly works on Windows 95.)
Is this going to the console, or somewhere else?
Once you have that determined, you then have to use a completely different set of API functions.
If you really want to see everything involved in doing this properly, have a look at the source of Win32::Unicode::Console.
On Linux, OpenBSD, FreeBSD, and similar OSes, you can usually just call binmode on the STDOUT and STDERR file handles.
binmode STDOUT, ':utf8';
binmode STDERR, ':utf8';
This assumes that the terminal is using the UTF-8 encoding.
For Python, the relevant issue in the tracker is http://bugs.python.org/issue1602 (as mentioned in the comments). Note that it has been open for 7 years. I tried to publish a working solution (based on information in the issue) as a Python package: https://github.com/Drekin/win-unicode-console, https://pypi.python.org/pypi/win_unicode_console.
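Per the package's documentation, enabling it is a one-liner (a minimal sketch):

import win_unicode_console

# Installs console-aware replacements for the standard streams on Windows.
win_unicode_console.enable()

print(u'Příliš žluťoučký kůň úpěl ďábelské ódy')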
Unicode issues in Perl
covers how the Win32 console works with Perl and the transcoding that happens behind the scenes from ANSI to Unicode; albeit not just a Perl issue, it affects other languages as well.
