I saved a socket variable to a CSV file, and it turns into a string (for example, <socket.socket fd=1608, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 5693), raddr=('127.0.0.1', 56052)>) so I can't use it now.
Is there any way to convert it back into a socket after reading this string from the CSV file?
Or, is there a way to save the socket to the CSV file without it turning into a string to begin with?
I tried using literal_eval from ast, but to no avail.
Related
I am trying to create a program that will open a port on the local machine and let others connect to it via netcat. My current code is:
s = socket.socket()
host = '127.0.0.1'
port = 12345
s.bind((host, port))
s.listen(5)
while True:
    c, addr = s.accept()
    print('Got connection from', addr)
    c.send('Thank you for connecting')
    c.close()
I am new to Python and sockets, but when I run this code it lets me make a netcat connection with the command:
nc 127.0.0.1 12345
But then my Python script raises an error on the c.send call:
TypeError: a bytes-like object is required, not 'str'
I am basically just trying to open a port, allow netcat to connect and have a full shell on that machine.
The reason for this error is that in Python 3, strings are Unicode, but when transmitting on the network, the data needs to be bytes instead. So... a couple of suggestions:
Use c.sendall() instead of c.send() to avoid cases where you may not have sent the entire message in one call (see the docs).
For literals, add a 'b' for bytes string: c.sendall(b'Thank you for connecting')
For variables, you need to encode Unicode strings to byte strings (see below)
Best solution (should work w/both 2.x & 3.x):
output = 'Thank you for connecting'
c.sendall(output.encode('utf-8'))
Epilogue/background: this isn't an issue in Python 2 because strings are byte strings already -- your original code would work perfectly in that environment. Unicode strings were added to Python in releases 1.6 & 2.0 but took a back seat until 3.0, when they became the default string type. Also see this similar question as well as this one.
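Putting those suggestions together, here is a minimal sketch of the original loop with the fixes applied (untested, Python 3):
s = socket.socket()
s.bind(('127.0.0.1', 12345))
s.listen(5)
while True:
    c, addr = s.accept()
    print('Got connection from', addr)
    c.sendall(b'Thank you for connecting')  # bytes literal + sendall
    c.close()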
On the receiving side, you can decode the bytes back to str with receive.decode('utf-8').
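For example (a small sketch; the connection variable c and the buffer size are assumptions):
receive = c.recv(1024)          # bytes off the wire
text = receive.decode('utf-8')  # back to a str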
You can change the send line to this:
c.send(b'Thank you for connecting')
The b makes it bytes instead.
An alternative solution is to attach a method to the file instance that does the explicit conversion.
import types
def _write_str(self, ascii_str):
    self.write(ascii_str.encode('ascii'))
source_file = open("myfile.bin", "wb")
source_file.write_str = types.MethodType(_write_str, source_file)
And then you can use it as source_file.write_str("Hello World").
What would be the best way to receive a big list through TCP sockets?
My code looks like this; obviously, that doesn't work when you have to receive a big list:
print 'connection from', client_address
while True:
    try:
        data = pickle.loads(connection.recv(8192))
    except EOFError:
        print 'no more data from', client_address
        break
The best way to do this is to transform the socket into a file object. You can do this with connection.makefile(). Then, instead of calling pickle.loads() -- which expects a complete byte string containing the entire pickled object -- call pickle.load(connection.makefile()). This way, you let the pickle module handle reading the entire "file". It will call the object's read function repeatedly until it has received all the expected data.
That basically assumes that the entire remaining part of the stream is to be unpickled (which appears to be what you're trying to do so it sounds like it should work). Otherwise, you may need to wrap your own pseudo-file-object around the stream that has some independent knowledge of the end of the pickled object.
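For illustration, here is a minimal sketch of the receiving side using makefile(); the host/port values and the assumption that the whole stream is one pickled list are mine, not from the original code:
import pickle
import socket

HOST, PORT = '127.0.0.1', 50007  # placeholders

server = socket.socket()
server.bind((HOST, PORT))
server.listen(1)
connection, client_address = server.accept()
print('connection from', client_address)

# Wrap the socket in a file-like object; use 'rb' because pickle needs binary data.
rfile = connection.makefile('rb')
data = pickle.load(rfile)   # reads from the stream until the pickle is complete
print('received', len(data), 'items')

rfile.close()
connection.close()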
I want to send a file with Python's ftplib from one FTP site to another, to avoid file read/write processes.
I create a BytesIO stream:
myfile=BytesIO()
And I successfully retrieve an image file from FTP site one with retrbinary:
ftp_one.retrbinary('RETR P1090080.JPG', myfile.write)
I can save this memory object to a regular file:
fot = open('casab.jpg', 'wb')
fot.write(myfile.getvalue())
But I am not able to send this stream via FTP with storbinary. I thought this would work:
ftp_two.storbinary('STOR magnafoto.jpg', myfile.getvalue())
But it doesn't. I get a long error message ending with:
buf = fp.read(blocksize)
AttributeError: 'str' object has no attribute 'read'
I also tried many absurd combinations, with no success. As an aside, I am also quite puzzled by what I am really doing with myfoto.write. Shouldn't it be myfoto.write()?
I am also quite clueless as to what this buffer thing does or requires. Is what I want too complicated to achieve? Should I just ping-pong the files with an intermediate write/read on my system? Thanks all.
EDIT: thanks to abanert I got things straight. For the record, the storbinary arguments were wrong and a myfile.seek(0) was needed to 'rewind' the stream before sending it. This is a working snippet that moves a file between two FTP addresses without intermediate physical file writes:
import ftplib as ftp
from io import BytesIO
ftp_one = ftp.FTP(address1, user1, pass1)
ftp_two = ftp.FTP(address2, user2, pass2)
myfile = BytesIO()
ftp_one.retrbinary('RETR imageoldname.jpg', myfile.write)
myfile.seek(0)
ftp_two.storbinary('STOR imagenewname.jpg', myfile)
ftp_one.close()
ftp_two.close()
myfile.close()
The problem is that you're calling getvalue(). Just don't do that:
ftp_two.storbinary('STOR magnafoto.jpg', myfile)
storbinary requires a file-like object that it can call read on.
Fortunately, you have just such an object, myfile, a BytesIO. (It's not entirely clear from your code what the sequence of things is here—if this doesn't work as-is, you may need to myfile.seek(0) or create it in a different mode or something. But a BytesIO will work with storbinary unless you do something wrong.)
But instead of passing myfile, you pass myfile.getvalue(). And getvalue "Returns bytes containing the entire contents of the buffer."
So, instead of giving storbinary a file-like object that it can call read on, you're giving it a bytes object, which is of course the same as str in Python 2.x, and you can't call read on that.
For your aside:
As an aside, I am also quite puzzled with what I am really doing with myfoto.write. Shouldnt it be myfoto.write() ?
Look at the docs. The second parameter isn't a file, it's a callback function.
The callback function is called for each block of data received, with a single string argument giving the data block.
What you want is a function that appends each block of data to the end of myfoto. While you could write your own function to do that:
def callback(block_of_data):
    myfoto.write(block_of_data)
… it should be pretty obvious that this function does exactly the same thing as the myfoto.write method. So, you can just pass that method itself.
If you don't understand about bound methods, see Method Objects in the tutorial.
This flexibility, as weird as it seems, lets you do something even better than downloading the whole file into a buffer to send to another server. You can actually open the two connections at the same time, and use callbacks to send each buffer from the source server to the destination server as it's received, without ever storing anything more than one buffer.
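Here is a rough sketch of that streaming idea, assuming ftp_one and ftp_two are the already-connected FTP objects from the snippet above (untested; it relies on ftplib's lower-level transfercmd/voidresp calls):
# Open the upload's data connection ourselves so we can feed it block by block.
dest_sock = ftp_two.transfercmd('STOR imagenewname.jpg')

def relay(block):
    # Called by retrbinary for each block received from the source server;
    # forward it straight to the destination server.
    dest_sock.sendall(block)

ftp_one.retrbinary('RETR imageoldname.jpg', relay)

dest_sock.close()   # end of the upload data
ftp_two.voidresp()  # read the final "transfer complete" reply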
But, unless you really need that, you probably don't want to go through all that complexity.
In fact, in general, ftplib is kind of low-level. And it has some designs (like the fact that storbinary takes a file, while retrbinary takes a callback) that make total sense at that low level but seem very odd from a higher level. So, you may want to look at some of the higher-level libraries by doing a search at PyPI.
I have a file which contains raw IP packets in binary form. The data in the file contains a full IP header, TCP/UDP header, and data. I would like to use any language (preferably Python) to read this file and dump the data onto the wire.
In Linux I know you can write to some devices directly (echo "DATA" > /dev/device_handle). Would using Python to open /dev/eth1 achieve the same effect (i.e., could I do echo "DATA" > /dev/eth1)?
Something like:
#!/usr/bin/env python
import socket

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
s.bind(("ethX", 0))

blocksize = 100
with open('filename.txt', 'rb') as fh:  # raw packet bytes, so read in binary mode
    while True:
        block = fh.read(blocksize)
        if not block:
            break  # EOF
        s.send(block)
It should work; I haven't tested it, however.
ethX needs to be changed to your interface (e.g. eth1, eth2, wlan1, etc.)
You may want to play around with blocksize. 100 bytes at a time should be fine; you may consider going up, but I'd stay below the 1500-byte Ethernet PDU.
It's possible you'll need root/sudoer permissions for this. I've needed them before when reading from a raw socket, never tried simply writing to one.
This is provided that you literally have the packet (and only the packet) dumped to the file, not in any sort of encoding (e.g. hex). If a byte is 0x30 it should be '0' in your file, not "0x30", "30", or anything like that. If this is not the case you'll need to replace the while loop with some processing, but the send is still the same.
Since I just read that you're trying to send IP packets: in this case, it's also likely that you need to build the entire packet at once and then push that to the socket; the simple while loop won't be sufficient.
No; there is no /dev/eth1 device node -- network devices are in a different namespace from character/block devices like terminals and hard drives. You must create an AF_PACKET socket to send raw IP packets.
We have a device receiving 802.11p MAC frames from the air and feeding them to the serial port completely unchanged (no network-layer headers), and we'd like to see them arranged in Wireshark, so we can have a sort of self-made sniffer for this 802.11p protocol.
My approach (in Linux with Python) was to open the serial port, read the frames, and write them to a named pipe that Wireshark would be listening to. After a lot of searching I've found out that what I write into that pipe has to be in the pcap file format. I've looked at some Python modules that do pcap formatting (scapy, pcapy, dpkt), but I can't find any that takes a pure MAC frame and simply writes it to a file in pcap format, in a way that Wireshark can read, without me having to do all the parsing. What is your suggestion?
How about just creating a tap device and writing the frames to that? Then you can sniff the tap device with Wireshark just like any other device. There's an example using a tap device in Python here, and a longer tutorial (actually about tun devices) in C here.
NB: I haven't tested this, but the idea seems reasonable...
UPDATE: This seems to work. It's based on the above gist, but simply reads frame data from a file and writes it to the device:
import sys
import fcntl
import os
import struct
import subprocess
TUNSETIFF = 0x400454ca
TUNSETOWNER = TUNSETIFF + 2
IFF_TUN = 0x0001
IFF_TAP = 0x0002
IFF_NO_PI = 0x1000
# Open the TUN/TAP clone device.
tun = open('/dev/net/tun', 'r+b')
# Tell it we want a TAP device named lars0 (no packet-info header).
ifr = struct.pack('16sH', 'lars0', IFF_TAP | IFF_NO_PI)
fcntl.ioctl(tun, TUNSETIFF, ifr)
# Optionally, allow the device to be accessed by a normal user (UID 1000).
fcntl.ioctl(tun, TUNSETOWNER, 1000)
# Bring the interface up.
subprocess.check_call(['ifconfig', 'lars0', 'up'])
print 'waiting'
sys.stdin.readline()
# Read a raw frame from the file.
packet = open('/tmp/packet.raw', 'rb').read()
# Write the frame into the TAP device so Wireshark can sniff it on lars0.
os.write(tun.fileno(), packet)
print 'waiting'
sys.stdin.readline()