Python chat reading and writing simultaneously [closed] - python

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am trying to build a client-side chat that can send and read messages simultaneously.
One problem is that when I write a message, if someone else sends something, it disrupts the message I am writing.
Another problem is raw_input, which blocks the user from reading new messages.
I tried to fix this by using msvcrt, which causes another problem (I can't see the message I am writing or edit it).
How can I fix these three problems?
Edit: without using threads.

I think you may need asynchronous sockets: they give you the ability to handle sending and receiving in a single thread.
Look here for asynchronous sockets in Python. This will let you code it "bare bones" (i.e. keep most of your code and just use the sockets).
Another option is to use Twisted. This has some complications, since it is a complete framework, but it gives you a lot of lift.
You can also try multi-threading. This is not trivial to do, however.
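For the single-threaded route, here is a minimal sketch using the stdlib selectors module to multiplex one connected socket for both reading and writing. The pump function and its outgoing-queue interface are illustrative assumptions, not from the question:

```python
import selectors

def pump(sock, outgoing, max_loops=10):
    """One single-threaded loop: read any available data from `sock` and
    send queued outgoing messages, with no threads involved.
    (Illustrative sketch; the queue-based interface is an assumption.)"""
    sel = selectors.DefaultSelector()
    sel.register(sock, selectors.EVENT_READ | selectors.EVENT_WRITE)
    received = []
    to_send = list(outgoing)
    for _ in range(max_loops):
        for key, mask in sel.select(timeout=0.1):
            if mask & selectors.EVENT_READ:
                data = key.fileobj.recv(4096)
                if data:
                    received.append(data)
            if mask & selectors.EVENT_WRITE and to_send:
                key.fileobj.sendall(to_send.pop(0))
        # Stop once everything queued is sent and we have read something.
        if not to_send and received:
            break
    sel.unregister(sock)
    return b"".join(received)
```

Note that on Windows, select-style APIs only accept sockets, so keyboard input would still have to be polled (e.g. with msvcrt.kbhit()) inside the same loop rather than passed to the selector.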

Related

How do I set a dataflow window that will continually retrigger for more data after all records have been written to bigquery? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
We have a streaming pipeline reading from pub/sub and writing to bigquery. It wasn't working without adding a window function, because a default global window only fires once and doesn't know when to re-trigger. There is no GroupBy or combine.
We tried to add a Beam window with a trigger, but there are some problems. If we use a global window, it runs really slowly and sometimes gives null pointer exceptions. If we use a fixed window, it's fast, but it doesn't always seem to acknowledge the Pub/Sub messages.
What we'd really want is a pipeline that reads from pub/sub, gets a batch of however many it could get, writes to bigquery, and once everything is written and the pubsub messages are acknowledged, retrigger the read-from-pubsub. Is this possible?
I think you are looking for this: there is a composite trigger, Repeatedly.forever, which you can combine with AfterCount.
Something like this, where you trigger after every 1000 elements read:
Repeatedly.forever(AfterCount(1000))
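For intuition (this is plain Python, not Beam API code), Repeatedly.forever(AfterCount(n)) fires a pane every n elements, indefinitely. That firing behavior can be sketched as:

```python
def batches(elements, n):
    """Yield successive panes of up to n elements, firing repeatedly
    (the AfterCount-inside-Repeatedly semantics, simulated in plain Python)."""
    buf = []
    for e in elements:
        buf.append(e)
        if len(buf) == n:
            yield list(buf)  # the trigger fires: emit a pane
            buf.clear()      # Repeatedly: reset and keep going
    if buf:
        yield list(buf)      # final partial pane on drain
```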

Viewing and managing socket usage in Python [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I am trying to create a utility using Python (3) that, among other things, needs to look at and manage socket usage ON WINDOWS (no, I really don't care if it works on other OS's).
Looking at socket usage: To be clear, I don't want to create a socket or bind to an existing one, I want to be able to get a full list of what sockets are open and what programs have opened them. If you're not sure about what I mean, take a look at TCPView, which does exactly what I'm talking about.
Managing socket usage: Basically, I want to be able to stop programs from connecting from the internet, if necessary. I would assume that the easiest way to do this is to use os.system() to add a new rule to the Windows Firewall, but as that doesn't seem too elegant I'm open to suggestions.
As that's obviously not all the utility will do, I would prefer a library/module of some sort over a 3rd-party program.
You can launch the command netstat -nabo to get the list of all active connections and parse the output to get the source, destination, process name, and PID; there is no straightforward way to get the active connections from the standard library (the third-party psutil module exposes them via psutil.net_connections()). You can also get the same information from Python by invoking iphlpapi. To block or allow a connection, Windows has a command line (netsh) to add or remove rules from the Windows Firewall.
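As a sketch of the parsing step, assuming the standard Windows netstat column layout (the function name and record fields are illustrative):

```python
def parse_netstat(output):
    """Parse Windows `netstat -nao`-style output into connection records.
    Assumes the usual column layout: Proto, Local, Foreign, [State], PID."""
    conns = []
    for line in output.splitlines():
        parts = line.split()
        if not parts or parts[0] not in ("TCP", "UDP"):
            continue  # skip headers and blank lines
        proto, local, remote = parts[0], parts[1], parts[2]
        pid = parts[-1]
        # UDP lines have no State column.
        state = parts[3] if proto == "TCP" and len(parts) >= 5 else ""
        conns.append({"proto": proto, "local": local,
                      "remote": remote, "state": state, "pid": pid})
    return conns
```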

hook file creation in python [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I'm working on a program where, in some part of it, it needs to listen to the OS for when files are created/saved, so I can work on said file.
I know the basic concept of hooking, but I don't know exactly how to implement it for this specific use (I know how to attach a hook to a specific PID, but here I need to listen to all processes and see if one of them is creating a file).
I'm using pydbg for my hooking needs, but if your answer uses something different, feel free to answer anyway.
Thanks :)
It seems you need something like watchdog, pyinotify or python-inotify. You can also see this SO question for other options.
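Those libraries deliver real filesystem events; as a crude stdlib fallback, you can poll a directory and diff snapshots (no hooking involved; the function name and interface here are illustrative):

```python
import os

def new_files(path, seen):
    """Poll `path` for regular files not yet in `seen`.
    Returns (sorted list of new names, updated snapshot set).
    A polling sketch only; watchdog/inotify give real create events."""
    current = {entry.name for entry in os.scandir(path) if entry.is_file()}
    return sorted(current - seen), current
```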

Storing scraped data into the database sqlite [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I have used Scrapy to scrape some text from a website, but I am not quite sure how to store it in SQLite. Can anyone help me with the code?
While you can find some examples that use blocking operations to interact with the database, it is worth noting that Scrapy is built on top of the Twisted library, meaning that at its core there is only a single thread with a single event loop for all operations. So when you do something like:
self.cursor.execute(...)
the entire system is waiting for a response from the database, including HTTP requests that are waiting to be executed, etc.
Having said that, I suggest you check this code snippet: https://github.com/riteshk/sc/blob/master/scraper/pipelines.py
Using twisted.enterprise.adbapi.ConnectionPool is a little more complex than simple blocking database-access code, but it plays well with the way Scrapy uses I/O operations.
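For contrast with the non-blocking adbapi route, the simple blocking version (fine for small crawls, with the caveat above) can be sketched with the stdlib sqlite3 module; the table name and item field are assumptions, not from the question:

```python
import sqlite3

class SQLitePipeline:
    """Minimal blocking pipeline sketch storing items with a `text` field.
    Table/field names are illustrative assumptions."""

    def __init__(self, db_path="scraped.db"):
        self.db_path = db_path

    def open_spider(self, spider=None):
        self.conn = sqlite3.connect(self.db_path)
        self.conn.execute("CREATE TABLE IF NOT EXISTS items (text TEXT)")

    def process_item(self, item, spider=None):
        # This call blocks the single event loop until the write finishes.
        self.conn.execute("INSERT INTO items (text) VALUES (?)",
                          (item["text"],))
        self.conn.commit()
        return item

    def close_spider(self, spider=None):
        self.conn.close()
```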

Python R/W to text file network [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
What could happen if multiple users run copies of the same Python script, designed to read/write data to a single text file stored on a network device, at the same time?
Will the processes stop working?
If so, what could be the solution?
Many bad things can happen. I don't think the processes will stop working, at least not because of concurrent access to a file, but what could happen is inconsistent file content: for example, if one process writes hello and there is a concurrent access to the file, you might get a line like hhelllolo.
A solution I can see is to use a database, as suggested, or to create a mechanism for locking the file against concurrent access (which might be cumbersome because you're working over a network, not on the same computer).
Another solution I can think of is to create a simple server-side script that handles the requests and locks the file against concurrent access. This is almost the same as using a database; you'd be building a storage system from scratch, so why bother :)
Hope this helps!
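The lock-file idea could be sketched like this with only the standard library. The .lock suffix and timeout are assumptions, and note that O_EXCL-based locks can be unreliable on some older network filesystems:

```python
import os
import time

def locked_append(path, text, lockpath=None, timeout=5.0):
    """Append `text` to `path`, guarded by a crude lock file.
    O_CREAT|O_EXCL makes lock creation atomic: only one process at a
    time can create the lock file, so writes don't interleave."""
    lockpath = lockpath or path + ".lock"
    deadline = time.time() + timeout
    while True:
        try:
            fd = os.open(lockpath, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break  # we hold the lock
        except FileExistsError:
            if time.time() > deadline:
                raise TimeoutError("could not acquire lock on " + lockpath)
            time.sleep(0.05)  # someone else holds it; retry shortly
    try:
        with open(path, "a") as f:
            f.write(text)
    finally:
        os.close(fd)
        os.remove(lockpath)  # release the lock
```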
