Is there a reason why there is no websocket library, based on asyncio, in the Python core?
It seems relevant to me, since websockets are now a standard supported by all modern browsers, and asyncio is well on its way to becoming the standard way to handle concurrency in Python.
Note: I am not trying to solve any particular problem; this is only to enrich my general knowledge.
I would say that aiohttp evolves much faster than Python itself: we make several aiohttp releases per year, while Python is released every year and a half.
That's why aiohttp will never be part of the Python standard library.
Note that the same situation applies to requests and to web frameworks like Django.
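For what it's worth, websocket support today lives in third-party packages such as aiohttp rather than in the standard library. A minimal sketch of an asyncio-based echo server using aiohttp's websocket API might look like this (the /ws route and the echo behaviour are just illustrative assumptions):

from aiohttp import web, WSMsgType

async def ws_handler(request):
    # Upgrade the HTTP request to a websocket connection
    ws = web.WebSocketResponse()
    await ws.prepare(request)

    # Echo every text message back to the client
    async for msg in ws:
        if msg.type == WSMsgType.TEXT:
            await ws.send_str("echo: " + msg.data)

    return ws

app = web.Application()
app.add_routes([web.get("/ws", ws_handler)])

if __name__ == "__main__":
    web.run_app(app, port=8080)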
I am reading the book "Advanced Programming in the UNIX Environment", which says:
"This library is important because all contemporary UNIX systems, such as the ones described in this book, provide the library routines that are specified in the C standard."
I am very confused here by the word "routine" (see Coroutine - Wikipedia).
Does it have any relation to coroutines?
No.
A "routine" is a series of instructions. Similar to a "function" or "program". The word "routine" is somewhat archaic (but "coroutine" is not).
"Library routines" simply means library functions. It has nothing to do with coroutines.
From my understanding, a coroutine is one specific, efficient way of executing a routine; other approaches you may be familiar with include multi-processing and multi-threading.
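To make the distinction concrete, here is a minimal sketch contrasting an ordinary routine (a plain function) with a coroutine in modern Python; the function names are made up for illustration:

import asyncio

# A "routine" in the C-standard sense: an ordinary function that runs
# from start to finish when called.
def greet(name):
    return f"hello, {name}"

# A coroutine: a function whose execution can be suspended and resumed,
# allowing other tasks to run while it waits.
async def greet_later(name):
    await asyncio.sleep(1)  # suspension point; the event loop runs other work
    return f"hello, {name}"

print(greet("world"))                     # runs to completion immediately
print(asyncio.run(greet_later("world")))  # driven by an event loop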
I have used Scrapy to scrape some text from a website, but I am not quite sure how to store it in SQLite. Can anyone help me with the code?
While you can find examples that use blocking operations to interact with the database, it is worth noting that Scrapy is built on top of the Twisted library, meaning that at its core there is only a single thread with a single event loop for all operations. So when you do something like:
self.cursor.execute(...)
the entire system waits for a response from the database, including HTTP requests that are waiting to be executed, etc.
Having said that, I suggest you check this code snippet: https://github.com/riteshk/sc/blob/master/scraper/pipelines.py
Using twisted.enterprise.adbapi.ConnectionPool is a little more complex than simple blocking database access code, but it plays well with the way Scrapy handles I/O operations.
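A minimal sketch of such a pipeline, assuming an SQLite database and made-up table/field names (pages, url, body), could look like this:

from twisted.enterprise import adbapi

class SQLitePipeline:
    def open_spider(self, spider):
        # check_same_thread=False because adbapi runs queries in a thread pool
        self.dbpool = adbapi.ConnectionPool(
            "sqlite3", "items.db", check_same_thread=False
        )
        self.dbpool.runInteraction(self._create_table)

    def close_spider(self, spider):
        self.dbpool.close()

    def process_item(self, item, spider):
        # Returning the Deferred lets the reactor keep serving other
        # requests while the insert runs, instead of blocking the loop.
        d = self.dbpool.runInteraction(self._insert, item)
        d.addErrback(spider.logger.error)
        d.addCallback(lambda _: item)
        return d

    @staticmethod
    def _create_table(cursor):
        cursor.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT, body TEXT)")

    @staticmethod
    def _insert(cursor, item):
        cursor.execute(
            "INSERT INTO pages (url, body) VALUES (?, ?)",
            (item.get("url"), item.get("body")),
        )

You would still need to register the pipeline class in your project's ITEM_PIPELINES setting for Scrapy to use it.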
I need to create an API for my project and learn Ruby along the way. I have previously created APIs with Flask-RESTful, and I'm looking for something as simple as Flask-RESTful, but for Ruby.
I have just started with Ruby, so I have no idea what would be best.
Flask-RESTful does many things automatically, and creating a useful, small API takes 15 minutes and 50 lines of code. Will I find something similar for Ruby?
Or should I be guided by other criteria?
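For reference, this is roughly the kind of small Flask-RESTful API the question is comparing against (the Todo resource, routes, and in-memory data are made-up examples):

from flask import Flask
from flask_restful import Api, Resource, reqparse

app = Flask(__name__)
api = Api(app)

# In-memory "database" just for the example
TODOS = {"1": {"task": "learn Ruby"}}

parser = reqparse.RequestParser()
parser.add_argument("task", required=True)

class Todo(Resource):
    def get(self, todo_id):
        return TODOS.get(todo_id, {}), 200

    def put(self, todo_id):
        args = parser.parse_args()
        TODOS[todo_id] = {"task": args["task"]}
        return TODOS[todo_id], 201

api.add_resource(Todo, "/todos/<string:todo_id>")

if __name__ == "__main__":
    app.run(debug=True)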
Check out Sinatra. It's a Ruby framework quite similar to Flask.
I'm new to web development and I'm going to make a website that responds with data received from requests to a web service (Facebook, for example). How do I choose what is most useful here?
node.js has a callback model that lets it avoid waiting while gathering data for the user from other services (but I've broken my fingers and my brain trying to build something like a class with inheritance in JavaScript, and the whole server goes down after an unhandled error in a script)
Python is very convenient for working with different kinds of data, and it's more comfortable for me as a former C++ developer
yesterday I read about Twisted, a Python framework that also uses callbacks (see the sketch after this question)
Please help me choose what to use; the main criteria are performance and simple code.
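As a concrete illustration of the callback style mentioned above, here is a minimal Twisted sketch; fetch_user is a made-up stand-in for a real non-blocking call such as an HTTP request:

from twisted.internet import reactor, defer

def fetch_user(user_id):
    # Stand-in for a real non-blocking call: it returns a Deferred
    # that fires later instead of blocking the event loop.
    d = defer.Deferred()
    reactor.callLater(1, d.callback, {"id": user_id, "name": "alice"})
    return d

def on_success(user):
    print("got user:", user)
    reactor.stop()

def on_error(failure):
    print("request failed:", failure)
    reactor.stop()

d = fetch_user(42)
d.addCallbacks(on_success, on_error)
reactor.run()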
The callback model might make your code more verbose, but WAIT! There is a solution! Check out waitfor.
Anyway, if it's a personal project then no one is forcing you to use node.js for web app development. You should go with what makes you more comfortable. If you like developing in Python, then go for it! :)
Why don't you try Django? It uses Python (which you said is more convenient for you) and is also very commonly used for web development.
So, we have had this: The 1000% Speedup, or, the stdlib sucks. It demonstrates a rather bad bug that is probably costing the universe a load of cycles even as we speak. It's fixed now, which is great.
So what parts of the standard library have you noticed to be evil?
I would expect all the responsible people to match up an answer with a bug report (if suitable) and a patch (if superman).
The rexec module has so many security holes in it that it's almost useless.
(since this is a different module, placing it in a different answer)
cgitb has some weird threading issues. See this bug report.