I am trying to decode an email address that was obscured by Cloudflare's email protection. I found a solution in Python, but I don't know how to do this in Dart or Flutter. Here is the Python code:
def cfDecodeEmail(encodedString):
    r = int(encodedString[:2], 16)
    email = ''.join([chr(int(encodedString[i:i+2], 16) ^ r) for i in range(2, len(encodedString), 2)])
    return email

print(cfDecodeEmail('543931142127353935313e352e7a373b39'))  # usage
In Python, encodedString[:2] and encodedString[i:i+2] extract two characters from encodedString. The Dart equivalents (assuming ASCII characters) are encodedString.substring(0, 2) and encodedString.substring(i, i + 2) respectively.
The equivalent of Python's ''.join(list) in Dart is list.join().
The equivalent of Python's list comprehension ([i for i in items]) in Dart is a collection-for: [for (var i in items) i].
The equivalent of Python's for i in range(2, len(encodedString), 2) in Dart is to use a basic for loop with a start, condition, and increment: for (var i = 2; i < encodedString.length; i += 2).
In Python, int(string, 16) parses string as a hexadecimal number. In Dart, use int.parse(string, radix: 16).
In Python, chr(integer) creates a string from the specified code point. The equivalent in Dart is String.fromCharCode(integer).
Putting it all together:
String cfDecodeEmail(String encodedString) {
  var r = int.parse(encodedString.substring(0, 2), radix: 16);
  var email = [
    for (var i = 2; i < encodedString.length; i += 2)
      String.fromCharCode(
        int.parse(encodedString.substring(i, i + 2), radix: 16) ^ r,
      )
  ].join();
  return email;
}

void main() {
  // Prints: me@usamaejaz.com
  print(cfDecodeEmail('543931142127353935313e352e7a373b39'));
}
How would I write this in Python so that it loops, generating random values while keeping the leading zeros?
std::string min = "000000000000000000000000000000000000000000000000000000000000000F";
std::string max = "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140";

std::string get_random_ecdsa_key() {
    while (true) {
        std::string s = get_random_hex(64);
        if (s >= min && s < max) {
            return s;
        }
    }
}
import random

while True:
    x = random.randint(0xF, 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140)
    print hex(x)[2:66].lower()
The code you've shared is pretty trivial to convert into Python. The logic structure is the same, you just have to port the syntax. The only challenge is implementing get_random_hex. I don't know what it actually does since you didn't include that part in your question, but I assume it randomly generates a string that is a certain length and contains hex digits.
import random

def get_random_hex(n):
    chars = "0123456789ABCDEF"
    return "".join(random.choice(chars) for _ in range(n))

def get_random_ecdsa_key():
    min = "000000000000000000000000000000000000000000000000000000000000000F"
    max = "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140"
    while True:
        s = get_random_hex(64)
        if min <= s < max:
            return s

print(get_random_ecdsa_key())
import random

min_bound = 0x000000000000000000000000000000000000000000000000000000000000000F
max_bound = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140

def get_random_ecdsa_key():
    r = random.randint(min_bound, max_bound)
    return "{:X}".format(r)
Here you go! The interesting parts are
random.randint(min, max) - returns an integer between min and max
return "{:X}".format(r) - just converts r from an integer to a hex string. The format string is equivalent to something like printf("%X", r) in C++
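One caveat: "{:X}" drops leading zeros, so a value near the bottom of the range comes back shorter than the 64-character strings the C++ version compares. If fixed-width keys matter, zero-padding the format is a small tweak; a sketch reusing the same bounds:

```python
import random

MIN_BOUND = 0xF
MAX_BOUND = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140

def get_random_ecdsa_key():
    # {:064X} left-pads with zeros to a fixed width of 64 hex digits
    return "{:064X}".format(random.randint(MIN_BOUND, MAX_BOUND))

key = get_random_ecdsa_key()
```

This way every key is exactly 64 characters, matching the string comparisons in the original C++.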
I want to implement the C++ logic below in Python.
struct hash_string
{
    hash_string() {}

    uint32_t operator ()(const std::string &text) const
    {
        //std::cout << text << std::endl;
        static const uint32_t primes[16] =
        {
            0x01EE5DB9, 0x491408C3, 0x0465FB69, 0x421F0141,
            0x2E7D036B, 0x2D41C7B9, 0x58C0EF0D, 0x7B15A53B,
            0x7C9D3761, 0x5ABB9B0B, 0x24109367, 0x5A5B741F,
            0x6B9F12E9, 0x71BA7809, 0x081F69CD, 0x4D9B740B,
        };
        //std::cout << text.size() << std::endl;
        uint32_t sum = 0;
        for (size_t i = 0; i != text.size(); i++) {
            sum += primes[i & 15] * (unsigned char)text[i];
            //std::cout << text[i] << std::endl;
            //std::cout << (unsigned char)text[i] << std::endl;
        }
        return sum;
    }
};
The Python version is below. It is not complete yet, since I haven't found a way to convert text to unsigned char. So, please help!
# -*- coding: utf-8 -*-
text = u'连衣裙女韩范'
primes = [0x01EE5DB9, 0x491408C3, 0x0465FB69, 0x421F0141,
0x2E7D036B, 0x2D41C7B9, 0x58C0EF0D, 0x7B15A53B,
0x7C9D3761, 0x5ABB9B0B, 0x24109367, 0x5A5B741F,
0x6B9F12E9, 0x71BA7809, 0x081F69CD, 0x4D9B740B]
# text[i] does not work (of course), but how to mimic the logic above?
rand = [primes[i & 15] * text[i] for i in range(len(text))]
print rand
sum_agg = sum(rand)
print sum_agg
Take text=u'连衣裙女韩范' for example: the C++ version returns 18 for text.size() and the sum is 2422173716, while in Python I don't know how to make the size come out as 18.
The equality of text size is essential, as a start at least.
Because you are using Unicode, for an exact reproduction you will need to turn text into a series of bytes (chars in C++).
bytes_ = text.encode("utf8")
# when iterated over this will yield ints (in python 3)
# or single character strings in python 2
You should use more pythonic idioms for iterating over a pair of sequences
pairs = zip(bytes_, primes)
What if bytes_ is longer than primes? Use itertools.cycle
from itertools import cycle
pairs = zip(bytes_, cycle(primes))
All together:
from itertools import cycle
text = u'连衣裙女韩范'
primes = [0x01EE5DB9, 0x491408C3, 0x0465FB69, 0x421F0141,
0x2E7D036B, 0x2D41C7B9, 0x58C0EF0D, 0x7B15A53B,
0x7C9D3761, 0x5ABB9B0B, 0x24109367, 0x5A5B741F,
0x6B9F12E9, 0x71BA7809, 0x081F69CD, 0x4D9B740B]
# if python 3
rand = [byte * prime for byte, prime in zip(text.encode("utf8"), cycle(primes))]
# else if python 2 (use ord to convert single character string to int)
rand = [ord(byte) * prime for byte, prime in zip(text.encode("utf8"), cycle(primes))]
hash_ = sum(rand)
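One more detail for an exact reproduction: the C++ sum is a uint32_t, so it wraps modulo 2^32, while Python integers grow without bound. Masking the final sum with & 0xFFFFFFFF reproduces the 2422173716 value quoted in the question. A sketch putting the pieces together:

```python
from itertools import cycle

primes = [0x01EE5DB9, 0x491408C3, 0x0465FB69, 0x421F0141,
          0x2E7D036B, 0x2D41C7B9, 0x58C0EF0D, 0x7B15A53B,
          0x7C9D3761, 0x5ABB9B0B, 0x24109367, 0x5A5B741F,
          0x6B9F12E9, 0x71BA7809, 0x081F69CD, 0x4D9B740B]

def hash_string(text):
    data = text.encode("utf8")            # 18 bytes for the sample string
    total = sum(b * p for b, p in zip(data, cycle(primes)))
    return total & 0xFFFFFFFF             # emulate uint32_t wraparound

h = hash_string(u'连衣裙女韩范')           # 2422173716, matching the C++ result
```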
I need to calculate a checksum for a hex serial word string using XOR. To my (limited) knowledge this has to be performed using the bitwise operator ^. Also, the data has to be converted to binary integer form. Below is my rudimentary code - but the checksum it calculates is 1000831. It should be 01001110 or 47hex. I think the error may be due to missing the leading zeros. All the formatting I've tried to add the leading zeros turns the binary integers back into strings. I appreciate any suggestions.
word = ('010900004f')
#divide word into 5 separate bytes
wd1 = word[0:2]
wd2 = word[2:4]
wd3 = word[4:6]
wd4 = word[6:8]
wd5 = word[8:10]
#this converts a hex string to a binary string
wd1bs = bin(int(wd1, 16))[2:]
wd2bs = bin(int(wd2, 16))[2:]
wd3bs = bin(int(wd3, 16))[2:]
wd4bs = bin(int(wd4, 16))[2:]
wd5bs = bin(int(wd5, 16))[2:]
#this converts binary string to binary integer
wd1i = int(wd1bs)
wd2i = int(wd2bs)
wd3i = int(wd3bs)
wd4i = int(wd4bs)
wd5i = int(wd5bs)
#now that I have binary integers, I can use the XOR bitwise operator to cal cksum
checksum = (wd1i ^ wd2i ^ wd3i ^ wd4i ^ wd5i)
#I should get 47 hex as the checksum
print (checksum, type(checksum))
Why use all these conversions and the costly string functions?
(I will answer the X part of your XY-Problem, not the Y part.)
def checksum(s):
    v = int(s, 16)
    checksum = 0
    while v:
        checksum ^= v & 0xff
        v >>= 8
    return checksum

cs = checksum('010900004f')
print(cs, bin(cs), hex(cs))
Result is 0x47 as expected. Btw 0x47 is 0b1000111 and not as stated 0b1001110.
s = '010900004f'
b = int(s, 16)
print reduce(lambda x, y: x ^ y, ((b>> 8*i)&0xff for i in range(0, len(s)/2)), 0)
Just modify like this.
before:
wd1i = int(wd1bs)
wd2i = int(wd2bs)
wd3i = int(wd3bs)
wd4i = int(wd4bs)
wd5i = int(wd5bs)
after:
wd1i = int(wd1bs, 2)
wd2i = int(wd2bs, 2)
wd3i = int(wd3bs, 2)
wd4i = int(wd4bs, 2)
wd5i = int(wd5bs, 2)
Why doesn't your code work?
Because you are misunderstanding the behavior of int(wd1bs).
See the docs: Python's int function treats its argument as base 10 by default,
but you expect it to treat its argument as base 2.
So you need to write int(wd1bs, 2).
Or you can rewrite your entire code like this, so you don't need the bin function at all. This code is basically the same as @Hyperboreus's answer. :)
w = int('010900004f', 16)
w1 = (0xff00000000 & w) >> 4*8
w2 = (0x00ff000000 & w) >> 3*8
w3 = (0x0000ff0000 & w) >> 2*8
w4 = (0x000000ff00 & w) >> 1*8
w5 = (0x00000000ff & w)
checksum = w1 ^ w2 ^ w3 ^ w4 ^ w5
print hex(checksum)
#'0x47'
And here is a shorter one.
import binascii
word = '010900004f'
print hex(reduce(lambda a, b: a ^ b, (ord(i) for i in binascii.unhexlify(word))))
#0x47
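The answers above are written for Python 2. In Python 3, where print is a function, reduce lives in functools, and bytes.fromhex replaces binascii.unhexlify, the same one-liner idea might look like this:

```python
from functools import reduce

def xor_checksum(hex_word):
    # bytes.fromhex turns the hex string into raw bytes;
    # iterating bytes in Python 3 yields ints, so we can XOR them directly
    return reduce(lambda a, b: a ^ b, bytes.fromhex(hex_word))

cs = xor_checksum('010900004f')
print(hex(cs))  # 0x47
```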
I'm trying to figure out how to mirror encryption/decryption from an existing C function over to python. However, in my tests of encrypting with C and decrypting with python, I can't figure out some elements around the key.
These were all code samples online, so I commented things like the base64 call in Python, and at this point I'm unsure on:
1) If I correctly determined the KEYBIT to KEY_SIZE/BLOCK_SIZE settings.
2) How to get from password to key in python to match the C code.
3) Am I missing any core conversion steps?
rijndael.h in C:
#define KEYLENGTH(keybits) ((keybits)/8)
#define RKLENGTH(keybits) ((keybits)/8+28)
#define NROUNDS(keybits) ((keybits)/32+6)
encrypting in C
#define KEYBITS 256
unsigned long rk[RKLENGTH(KEYBITS)];
unsigned char key[KEYLENGTH(KEYBITS)];
char *password = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
for (i = 0; i < sizeof(key); i++)
    key[i] = *password != 0 ? *password++ : 0;

nrounds = rijndaelSetupEncrypt(rk, key, 256);

count = 0;
while (count < strlen(input)) {
    unsigned char ciphertext[16];
    unsigned char plaintext[16];
    for (i = 0; i < sizeof(plaintext); i++) {
        if (count < strlen(input))
            plaintext[i] = input[count++];
        else
            plaintext[i] = 0;
    }
    rijndaelEncrypt(rk, nrounds, plaintext, ciphertext);
    if (fwrite(ciphertext, sizeof(ciphertext), 1, output) != 1) {
        fclose(file);
        fputs("File write error", stderr);
        return 0;
    }
}
Decrypt in Python
KEY_SIZE = 32
BLOCK_SIZE = 16

def decrypt(password, filename):
    #
    # I KNOW THIS IS WRONG, BUT HOW DO I CONVERT THE PASSWD TO KEY?
    #
    key = password
    padded_key = key.ljust(KEY_SIZE, '\0')
    #ciphertext = base64.b64decode(encoded)
    ciphertext = file_get_contents(filename)
    r = rijndael(padded_key, BLOCK_SIZE)
    padded_text = ''
    for start in range(0, len(ciphertext), BLOCK_SIZE):
        padded_text += r.decrypt(ciphertext[start:start+BLOCK_SIZE])
    plaintext = padded_text.split('\x00', 1)[0]
    return plaintext
Thanks!
The example C code just copies 32 bytes from the password string into the key. If the password is shorter than 32 bytes, it pads on the right with zeroes.
Translated into python, this would be:
key = password[:32]+b'\x00'*(32-len(password))
Which actually produces the same result as
password.ljust(32, '\0')
You should note however that this method of generating keys is considered extremely unsafe. If the attacker suspects that the key consists of ASCII characters padded with 0 bytes, the keyspace (the number of possible keys) is reduced considerably. If the key is made of random bytes, there are 256^32 = 1.15e77 keys. If the key e.g. begins with 8 ASCII characters followed by zeroes, there are only (127-32)^8 = 6.63e15 possible keys. And since there are dictionaries out there of often-used passwords, the attacker probably wouldn't even have to exhaust this reduced keyspace; they would try the relatively small dictionaries first.
Consider using a cryptographic hash function or another proper key derivation function to convert the passphrase into a key.
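As one concrete option for that advice, Python's standard library ships hashlib.pbkdf2_hmac. A sketch (the salt and iteration count below are placeholders, not values from the original code):

```python
import hashlib

password = b"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
salt = b"example-salt"  # placeholder; use a random, per-user salt in practice

# Derive a 32-byte (256-bit) AES key from the passphrase
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
```

Unlike zero-padding, the derived key uses the full 256-bit keyspace regardless of the passphrase's structure, and the iteration count slows down dictionary attacks.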
Try using the pycrypto toolkit. It implements Rijndael/AES and other ciphers.
I'm working on making a URL shortener for my site, and my current plan (I'm open to suggestions) is to use a node ID to generate the shortened URL. So, in theory, node 26 might be short.com/z, node 1 might be short.com/a, node 52 might be short.com/Z, and node 104 might be short.com/ZZ. When a user goes to that URL, I need to reverse the process (obviously).
I can think of some kludgy ways to go about this, but I'm guessing there are better ones. Any suggestions?
ASCII to int:
ord('a')
gives 97
And back to a string:
in Python2: str(unichr(97))
in Python3: chr(97)
gives 'a'
>>> ord("a")
97
>>> chr(97)
'a'
If multiple characters are bound inside a single integer/long, as was my issue:
s = '0123456789'
nchars = len(s)
# string to int or long. Type depends on nchars
x = sum(ord(s[byte])<<8*(nchars-byte-1) for byte in range(nchars))
# int or long to string
''.join(chr((x>>8*(nchars-byte-1))&0xFF) for byte in range(nchars))
Yields '0123456789' and x = 227581098929683594426425L
What about BASE58 encoding the URL? Like for example flickr does.
# note the missing lowercase L and the zero etc.
BASE58 = '123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ'

url = ''
while node_id >= 58:
    div, mod = divmod(node_id, 58)
    url = BASE58[mod] + url
    node_id = int(div)
return 'http://short.com/%s' % BASE58[node_id] + url
Turning that back into a number isn't a big deal either.
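For instance, decoding can mirror the loop above, reusing the same alphabet (a sketch; the function names here are mine, not from the answer):

```python
BASE58 = '123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ'

def base58_encode(node_id):
    # same algorithm as above, wrapped in a function
    url = ''
    while node_id >= 58:
        node_id, mod = divmod(node_id, 58)
        url = BASE58[mod] + url
    return BASE58[node_id] + url

def base58_decode(s):
    # accumulate digits most-significant first
    node_id = 0
    for ch in s:
        node_id = node_id * 58 + BASE58.index(ch)
    return node_id
```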
Use hex(id)[2:] and int(urlpart, 16). There are other options. base32 encoding your id could work as well, but I don't know that there's any library that does base32 encoding built into Python.
Apparently a base32 encoder was introduced in Python 2.4 with the base64 module. You might try using b32encode and b32decode. You should give True for both the casefold and map01 options to b32decode in case people write down your shortened URLs.
Actually, I take that back. I still think base32 encoding is a good idea, but that module is not useful for the case of URL shortening. You could look at the implementation in the module and make your own for this specific case. :-)
Apparently I'm late to the party; I'd just like to share a snippet I use very often.
/**
 * 62 = 26 + 26 + 10
 *
 * @param id
 * @return
 */
public String base62(long id) {
    StringBuilder sb = new StringBuilder();
    while (id >= 62) {
        int remainder = (int) (id % 62);
        id = id / 62;
        sb.append(index2char(remainder));
    }
    sb.append(index2char(id));
    return sb.reverse().toString();
}

public long reverseBase62(String s) {
    long r = 0;
    for (int i = 0; i < s.length(); i++) {
        r = r * 62;
        int index = char2index(s.charAt(i));
        if (index == -1) {
            throw new IllegalArgumentException(
                String.format("[%s] is in malformation, should only contain 0~9, a~z, A~Z", s));
        }
        r += index;
    }
    return r;
}

private char index2char(long index) {
    if (index < 10) {
        return (char) ('0' + index);
    }
    if (index < 36) {
        return (char) ('a' + index - 10);
    }
    return (char) ('A' + index - 36);
}

private int char2index(char c) {
    if ('0' <= c && c <= '9') {
        return c - '0';
    }
    if ('a' <= c && c <= 'z') {
        return c - 'a' + 10;
    }
    if ('A' <= c && c <= 'Z') {
        return c - 'A' + 36;
    }
    return -1;
}
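For consistency with the rest of this page, here is a sketch of the same base-62 scheme in Python, mirroring the Java logic above (function and variable names are mine, not from the snippet):

```python
import string

# index 0-9 -> '0'-'9', 10-35 -> 'a'-'z', 36-61 -> 'A'-'Z', matching the Java code
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def base62(n):
    chars = []
    while n >= 62:
        n, rem = divmod(n, 62)
        chars.append(ALPHABET[rem])
    chars.append(ALPHABET[n])
    # remainders were collected least-significant first, so reverse
    return ''.join(reversed(chars))

def reverse_base62(s):
    r = 0
    for ch in s:
        # ALPHABET.index raises ValueError on malformed input,
        # like the IllegalArgumentException in the Java version
        r = r * 62 + ALPHABET.index(ch)
    return r
```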