I am trying to read a tgz file and write it to CouchDB. Here is the code:
import couchdb
conn = couchdb.Server('http://localhost:5984')
db = conn['test']
with open('/tmp/test.txt.tgz.enc') as f:
    data = f.read()
doc = {'file': data}
db.save(doc)
It fails with:
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
File "/usr/local/lib/python2.7/dist-packages/couchdb/client.py", line 407, in save
_, _, data = func(body=doc, **options)
File "/usr/local/lib/python2.7/dist-packages/couchdb/http.py", line 399, in post_json
status, headers, data = self.post(*a, **k)
File "/usr/local/lib/python2.7/dist-packages/couchdb/http.py", line 381, in post
**params)
File "/usr/local/lib/python2.7/dist-packages/couchdb/http.py", line 419, in _request
credentials=self.credentials)
File "/usr/local/lib/python2.7/dist-packages/couchdb/http.py", line 176, in request
body = json.encode(body).encode('utf-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 11: ordinal not in range(128)
Still googling around for a solution myself.

Alright, I solved it. I double-checked the documentation: there is a put_attachment function, but it requires that the document you will assign the attachment to is created upfront. Code example in case somebody else needs it:
import couchdb
conn = couchdb.Server('http://localhost:5984')
db = conn['test1']
with open('/tmp/test.txt.tgz.enc', 'rb') as f:  # read as raw bytes
    data = f.read()
doc = {'name': 'testfile'}
db.save(doc)
db.put_attachment(doc, data, filename="test.txt.tgz")
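If you'd rather avoid the save-then-attach round trip, CouchDB also accepts inline attachments: the binary payload goes base64-encoded under the reserved _attachments key of the document you save. A minimal sketch (the document and attachment names here are placeholders, not anything the library mandates):

```python
import base64

def make_doc_with_inline_attachment(doc_name, attachment_name, raw_bytes):
    """Build a CouchDB document carrying binary data as an inline attachment.

    CouchDB stores inline attachments under the reserved `_attachments` key,
    with the payload base64-encoded so the document remains valid JSON.
    """
    return {
        'name': doc_name,
        '_attachments': {
            attachment_name: {
                'content_type': 'application/octet-stream',
                'data': base64.b64encode(raw_bytes).decode('ascii'),
            }
        },
    }
```

The resulting dict can be passed straight to db.save(doc); the base64 step is what sidesteps the UnicodeDecodeError, since the JSON encoder only ever sees ASCII.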
OK, I got it. See the example below: couch.create('test1') creates a database named test1; doc = {'name': 'testfile'} is the document, a key-value pair; open(...) opens the file to attach; db.save(doc) saves the document; and db.put_attachment(doc, f, filename=...) takes the saved document, the open file object, and the attachment name.

import couchdb
couch = couchdb.Server()
db = couch.create('test1')
doc = {'name': 'testfile'}
db.save(doc)
with open('/home/yamunapriya/pythonpractices/addd.py', 'rb') as f:
    db.put_attachment(doc, f, filename="/home/yamunapriya/pythonpractices/addd.py")
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x99 in position 0: invalid start byte when I tried to start a Flask server.
The following is the code:
from flask import Flask
app = Flask(__name__)
app.run(debug=True, port=5000)
This generates the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nero/.local/lib/python3.10/site-packages/flask/app.py", line 1142, in run
cli.load_dotenv()
File "/home/nero/.local/lib/python3.10/site-packages/flask/cli.py", line 709, in load_dotenv
dotenv.load_dotenv(path, encoding="utf-8")
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 332, in load_dotenv
return dotenv.set_as_environment_variables()
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 90, in set_as_environment_variables
for k, v in self.dict().items():
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 74, in dict
self._dict = OrderedDict(resolve_variables(raw_values, override=self.override))
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 222, in resolve_variables
for (name, value) in values:
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 82, in parse
for mapping in with_warn_for_invalid_lines(parse_stream(stream)):
File "/usr/lib/python3/dist-packages/dotenv/main.py", line 24, in with_warn_for_invalid_lines
for mapping in mappings:
File "/usr/lib/python3/dist-packages/dotenv/parser.py", line 180, in parse_stream
reader = Reader(stream)
File "/usr/lib/python3/dist-packages/dotenv/parser.py", line 71, in __init__
self.string = stream.read()
File "/usr/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x99 in position 0: invalid start byte
This is bare-bones code and should not generate any errors, but it does.
Attaching a screenshot for reference
My Environment settings are
Python 3.10.6
Ubuntu 22.04 - Linux [Tested on a Windows machine also]
Flask 2.2.2
Thanks in advance.
NB:
This is not a platform-specific issue; I tried it on Linux and Windows, in the Python shell and by executing a Python file.
This is not related to crypto issues either. There may be other questions with the same heading, but they are not related to Flask.
I tried to run a flask server with default configurations.
Expecting to run Flask Server
If you are here with the same issue, here is the solution.
I assumed it had nothing to do with the code I wrote, as there is barely anything in it to cause errors.
Reading through the error traceback, I saw it is related to dotenv, which is a Python package that deals with environment variables and secrets.
Creating a bare .env file in the project root solved the problem.
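To confirm that a non-UTF-8 file picked up by dotenv is really the culprit before resorting to trial and error, a small check like this can help. The helper name and path handling are my own; this is not part of Flask's or dotenv's API:

```python
from pathlib import Path

def env_file_is_utf8(path=".env"):
    """Return True if the file decodes cleanly as UTF-8 (or doesn't exist).

    Flask's CLI asks python-dotenv to read .env as UTF-8; any stray bytes
    (e.g. 0x99) there reproduce the UnicodeDecodeError from the traceback.
    """
    p = Path(path)
    if not p.exists():
        return True
    try:
        p.read_bytes().decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False
```

If this returns False for your .env, recreate the file (or re-save it as UTF-8) and the server should start.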
I can't find a way of reading Minecraft world files that I could use in Python.
I've looked around the internet but can find no tutorials, and only a few libraries that claim they can do this but never actually work.
from nbt import *
nbtfile = nbt.NBTFile("r.0.0.mca",'rb')
I expected this to work, but instead I got errors about the file not being compressed, or something of the sort.
Full error:
Traceback (most recent call last):
File "C:\Users\rober\Desktop\MinePy\MinecraftWorldReader.py", line 2, in <module>
nbtfile = nbt.NBTFile("r.0.0.mca",'rb')
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\site-packages\nbt\nbt.py", line 628, in __init__
self.parse_file()
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\site-packages\nbt\nbt.py", line 652, in parse_file
type = TAG_Byte(buffer=self.file)
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\site-packages\nbt\nbt.py", line 99, in __init__
self._parse_buffer(buffer)
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\site-packages\nbt\nbt.py", line 105, in _parse_buffer
self.value = self.fmt.unpack(buffer.read(self.fmt.size))[0]
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\gzip.py", line 276, in read
return self._buffer.read(size)
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\gzip.py", line 463, in read
if not self._read_gzip_header():
File "C:\Users\rober\AppData\Local\Programs\Python\Python36-32\lib\gzip.py", line 411, in _read_gzip_header
raise OSError('Not a gzipped file (%r)' % magic)
OSError: Not a gzipped file (b'\x00\x00')
Use anvil-parser (install with pip install anvil-parser).
Reading
import anvil
region = anvil.Region.from_file('r.0.0.mca')
# You can also provide the region file name instead of the object
chunk = anvil.Chunk.from_region(region, 0, 0)
# If `section` is not provided, will get it from the y coords
# and assume it's global
block = chunk.get_block(0, 0, 0)
print(block) # <Block(minecraft:air)>
print(block.id) # air
print(block.properties) # {}
https://pypi.org/project/anvil-parser/
According to this page, an .mca file is not a plain NBT file: it begins with an 8 KiB header containing the offsets of the chunks within the region file and the timestamps of their last updates.
I recommend reading the official announcement and this page for more information.
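To illustrate that header layout, here is a sketch that reads just the 8 KiB header of a region file: the first 4 KiB holds 1024 big-endian location entries (a 3-byte sector offset plus a 1-byte sector count per chunk), the second 4 KiB holds the timestamps. The function name and return shape are my own choices, not part of any library:

```python
import struct

def read_region_header(path):
    """Parse the 8 KiB header of a Minecraft Anvil (.mca) region file.

    Returns a dict mapping (chunk_x, chunk_z) -> (byte_offset, sector_count)
    for the chunks present in this region. An all-zero entry means the chunk
    has not been generated.
    """
    chunks = {}
    with open(path, "rb") as f:
        locations = f.read(4096)  # 1024 big-endian 4-byte location entries
        for i in range(1024):
            entry = struct.unpack(">I", locations[i * 4:i * 4 + 4])[0]
            sector_offset = entry >> 8    # offset in 4 KiB sectors
            sector_count = entry & 0xFF   # length in 4 KiB sectors
            if sector_offset:
                chunks[(i % 32, i // 32)] = (sector_offset * 4096, sector_count)
    return chunks
```

This is why feeding an .mca file straight to nbt.NBTFile fails with "Not a gzipped file": the reader hits the header, not NBT data.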
I would like to have such call:
pv -ptebar compressed.csv.gz | python my_script.py
Inside my_script.py I would like to decompress compressed.csv.gz and parse it using Python csv parser. I would expect something like this:
import csv
import gzip
import sys
with gzip.open(fileobj=sys.stdin, mode='rt') as f:
    reader = csv.reader(f)
    print(next(reader))
    print(next(reader))
    print(next(reader))
Of course this doesn't work, because gzip.open doesn't have a fileobj argument. Could you provide a working example solving this issue?
UPDATE
Traceback (most recent call last):
File "my_script.py", line 8, in <module>
print(next(reader))
File "/usr/lib/python3.5/gzip.py", line 287, in read1
return self._buffer.read1(size)
File "/usr/lib/python3.5/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/usr/lib/python3.5/gzip.py", line 461, in read
if not self._read_gzip_header():
File "/usr/lib/python3.5/gzip.py", line 404, in _read_gzip_header
magic = self._fp.read(2)
File "/usr/lib/python3.5/gzip.py", line 91, in read
self.file.read(size-self._length+read)
File "/usr/lib/python3.5/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
The traceback above appeared after applying Rawing's advice.
In Python 3.3+, you can pass a file object to gzip.open:
The filename argument can be an actual filename (a str or bytes object), or an existing file object to read from or write to.
So your code should work if you just omit the fileobj=:
with gzip.open(sys.stdin, mode='rt') as f:
Or, a slightly more efficient solution:
with gzip.open(sys.stdin.buffer, mode='rb') as f:
If for some odd reason you're using a python older than 3.3, you can directly invoke the gzip.GzipFile constructor. However, these old versions of the gzip module didn't have support for files opened in text mode, so we'll use sys.stdin's underlying buffer instead:
with gzip.GzipFile(fileobj=sys.stdin.buffer) as f:
Using gzip.open(sys.stdin.buffer, 'rt') fixes the issue for Python 3.
I am attempting to use a Webservice created by one of our developers that allows us to upload files into the system, within certain restrictions.
Using SUDS, I get the following information:
Suds ( https://fedorahosted.org/suds/ ) version: 0.4 GA build: R699-20100913
Service ( ConnectToEFS ) tns="http://tempuri.org/"
Prefixes (3)
ns0 = "http://schemas.microsoft.com/2003/10/Serialization/"
ns1 = "http://schemas.microsoft.com/Message"
ns2 = "http://tempuri.org/"
Ports (1):
(BasicHttpBinding_IConnectToEFS)
Methods (2):
CreateContentFolder(xs:string FileCode, xs:string FolderName, xs:string ContentType, xs:string MetaDataXML, )
UploadFile(ns1:StreamBody FileByteStream, )
Types (4):
ns1:StreamBody
ns0:char
ns0:duration
ns0:guid
My method to using UploadFile is as follows:
def webserviceUploadFile(self, targetLocation, fileName, fileSource):
    fileSource = './test_files/' + fileSource
    ntlm = WindowsHttpAuthenticated(username=uname, password=upass)
    client = Client(webservice_url, transport=ntlm)
    client.set_options(soapheaders={'TargetLocation': targetLocation, 'FileName': fileName})
    body = client.factory.create('AIRDocument')
    body_file = open(fileSource, 'rb')
    body_data = body_file.read()
    body.FileByteStream = body_data
    return client.service.UploadFile(body)
Running this gets me the following result:
Traceback (most recent call last):
File "test_cases.py", line 639, in test_upload_file_invalid_extension
result_string = self.HM.webserviceUploadFile('9999', 'AD-1234-5424__44.exe',
'test_data.pdf')
File "test_cases.py", line 81, in webserviceUploadFile
return client.service.UploadFile(body)
File "build\bdist.win32\egg\suds\client.py", line 542, in __call__
return client.invoke(args, kwargs)
File "build\bdist.win32\egg\suds\client.py", line 595, in invoke
soapenv = binding.get_message(self.method, args, kwargs)
File "build\bdist.win32\egg\suds\bindings\binding.py", line 120, in get_message
content = self.bodycontent(method, args, kwargs)
File "build\bdist.win32\egg\suds\bindings\document.py", line 63, in bodycontent
p = self.mkparam(method, pd, value)
File "build\bdist.win32\egg\suds\bindings\document.py", line 105, in mkparam
return Binding.mkparam(self, method, pdef, object)
File "build\bdist.win32\egg\suds\bindings\binding.py", line 287, in mkparam
return marshaller.process(content)
File "build\bdist.win32\egg\suds\mx\core.py", line 62, in process
self.append(document, content)
File "build\bdist.win32\egg\suds\mx\core.py", line 75, in append
self.appender.append(parent, content)
File "build\bdist.win32\egg\suds\mx\appender.py", line 102, in append
appender.append(parent, content)
File "build\bdist.win32\egg\suds\mx\appender.py", line 243, in append
Appender.append(self, child, cont)
File "build\bdist.win32\egg\suds\mx\appender.py", line 182, in append
self.marshaller.append(parent, content)
File "build\bdist.win32\egg\suds\mx\core.py", line 75, in append
self.appender.append(parent, content)
File "build\bdist.win32\egg\suds\mx\appender.py", line 102, in append
appender.append(parent, content)
File "build\bdist.win32\egg\suds\mx\appender.py", line 198, in append
child.setText(tostr(content.value))
File "build\bdist.win32\egg\suds\sax\element.py", line 251, in setText
self.text = Text(value)
File "build\bdist.win32\egg\suds\sax\text.py", line 43, in __new__
result = super(Text, cls).__new__(cls, *args, **kwargs)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 10: ordinal
not in range(128)
After much research and talking with the developer of the webservice, I modified body_data = body_file.read() to body_data = body_file.read().decode("UTF-8"), which gets me this error:
Traceback (most recent call last):
File "test_cases.py", line 639, in test_upload_file_invalid_extension
result_string = self.HM.webserviceUploadFile('9999', 'AD-1234-5424__44.exe', 'test_data.pdf')
File "test_cases.py", line 79, in webserviceUploadFile
body_data = body_file.read().decode("utf-8")
File "C:\python27\lib\encodings\utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe2 in position 10: invalid
continuation byte
Which is less than helpful.
After more research into the problem, I tried adding errors='ignore' to the UTF-8 decode, and this was the result:
<TransactionDescription>Error in INTL-CONF_France_PROJ_MA_126807.docx: An exception has been thrown when reading the stream.. Inner Exception: System.Xml.XmlException: The byte 0x03 is not valid at this location. Line 1, position 318.
at System.Xml.XmlExceptionHelper.ThrowXmlException(XmlDictionaryReader reader, String res, String arg1, String arg2, String arg3)
at System.Xml.XmlUTF8TextReader.Read()
at System.ServiceModel.Dispatcher.StreamFormatter.MessageBodyStream.Exhaust(XmlDictionaryReader reader)
at System.ServiceModel.Dispatcher.StreamFormatter.MessageBodyStream.Read(Byte[] buffer, Int32 offset, Int32 count). Source: System.ServiceModel</TransactionDescription>
Which pretty much stumps me on what to do. Based on the stack trace returned by the webservice, it looks like it wants UTF-8, but I can't seem to get the data to the webservice without Python or SUDS throwing a fit, or without ignoring problems in the encoding. The system I'm working on only takes in Microsoft Office file types (doc, xls, and the like), PDFs, and TXT files, so using a format where I have more control over the encoding is not an option. I also tried detecting the encoding used by the sample PDF and the sample DOCX, but everything that was suggested (Latin-1, ISO 8859-x, and several Windows codepages) was accepted by Python and SUDS, yet rejected by the webservice.
Also note that the example shown is a test of an invalid extension. The error applies even in what should be a test of a successful upload, which is really the only time the final stack trace ever shows up.
You can use base64.b64encode(body_file.read()), which returns the base64-encoded value as a string. Your request variable must then be a string.
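In other words, encode the raw bytes rather than trying to decode them as text. Something along these lines; the helper name is mine, not part of SUDS:

```python
import base64

def file_to_base64(path):
    """Read a file's raw bytes and return them as an ASCII-safe base64 string.

    Unlike .decode('utf-8'), this never fails on arbitrary binary content
    (PDF, DOCX, ...), because base64 maps every byte sequence to ASCII.
    """
    with open(path, 'rb') as f:
        return base64.b64encode(f.read()).decode('ascii')
```

The receiving service must of course base64-decode the payload on its side; whether that matches the ns1:StreamBody contract is something to confirm with the service's developer.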
Is there a way to store unicode data with App Engine's BlobStore (in Python)?
I'm saving the data like this
file_name = files.blobstore.create(mime_type='application/octet-stream')
with files.open(file_name, 'a') as f:
    f.write('<as><a>' + '</a><a>'.join(stringInUnicode) + '</a></as>')
But on the production (not development) server I'm getting this error. It seems to be converting my Unicode into ASCII and I don't know why.
Why is it trying to convert back to ASCII? Can I avoid this?
Traceback (most recent call last):
File "/base/data/home/apps/myapp/1.349473606437967000/myfile.py", line 137, in get
f.write('<as><a>' + '</a><a>'.join(stringInUnicode) + '</a></as>')
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/files/file.py", line 364, in write
self._make_rpc_call_with_retry('Append', request, response)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/files/file.py", line 472, in _make_rpc_call_with_retry
_make_call(method, request, response)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/files/file.py", line 226, in _make_call
rpc.make_call(method, request, response)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 509, in make_call
self.__rpc.MakeCall(self.__service, method, request, response)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 115, in MakeCall
self._MakeCallImpl()
File "/base/python_runtime/python_lib/versions/1/google/appengine/runtime/apiproxy.py", line 161, in _MakeCallImpl
self.request.Output(e)
File "/base/python_runtime/python_lib/versions/1/google/net/proto/ProtocolBuffer.py", line 204, in Output
self.OutputUnchecked(e)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/files/file_service_pb.py", line 2390, in OutputUnchecked
out.putPrefixedString(self.data_)
File "/base/python_runtime/python_lib/versions/1/google/net/proto/ProtocolBuffer.py", line 432, in putPrefixedString
v = str(v)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 313: ordinal not in range(128)
A BLOB store contains binary data: bytes, not characters. So you're going to have to do an encode step of some sort. utf-8 seems as good an encoding as any.
f.write('<as><a>' + '</a><a>'.join(stringInUnicode) + '</a></as>')
This will go wrong if an item in stringInUnicode contains <, & or ]]> sequences. You'll want to do some escaping (either using a proper XML library to serialise the data, or manually):
with files.open(file_name, 'a') as f:
    f.write('<as>')
    for line in stringInUnicode:
        line = line.replace(u'&', u'&amp;').replace(u'<', u'&lt;').replace(u'>', u'&gt;')
        f.write('<a>%s</a>' % line.encode('utf-8'))
    f.write('</as>')
(This will still be ill-formed XML if the strings ever include control characters, but there's not so much you can do about that. If you need to store arbitrary binary in XML you'd need some ad-hoc encoding such as base-64 on top.)
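For the escaping step, the standard library already provides xml.sax.saxutils.escape, so a self-contained sketch of the whole serialisation (function name is illustrative) could be:

```python
from xml.sax.saxutils import escape

def build_payload(strings):
    """Serialise an iterable of unicode strings as UTF-8 <as><a>...</a></as> XML.

    escape() handles &, < and > so the markup stays well-formed; the final
    encode('utf-8') produces the bytes a blob store expects, avoiding the
    implicit ASCII conversion that raised the UnicodeEncodeError.
    """
    parts = ['<as>']
    for s in strings:
        parts.append('<a>%s</a>' % escape(s))
    parts.append('</as>')
    return ''.join(parts).encode('utf-8')
```

The returned bytes can then be written to the blob file handle directly.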