I've built an IB TWS application in python. All seems to work fine, but I'm struggling with one last element.
TWS requires a daily logout or restart. I've opted for the daily restart at a set time so I could easily anticipate a restart of my application at certain times (at least, so I thought.)
My program has one class, called InteractiveBrokersAPI, which subclasses EClient and EWrapper. Upon the start of my program, I create this instance and it successfully connects to and works with TWS. Now, say that TWS restarts daily at 23:00. I have implemented logic in my program that creates a new instance of InteractiveBrokersAPI and calls run() on it at 23:15. This too seems to work. I know this because upon creation, InteractiveBrokersAPI calls reqAccountUpdates() and I can see these updates coming in after my restart. But when I try to actually place a trade the next day, I get an error that it's not connected.
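For context, the restart flow I'm describing is roughly the following (a simplified sketch; the host, port, client id and the way the 23:15 restart is triggered are placeholders, not my exact code):

import threading
import time

from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class InteractiveBrokersAPI(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)

    def nextValidId(self, orderId):
        # Fires once the connection is fully established.
        self.reqAccountUpdates(True, "")

def start_session(host="127.0.0.1", port=7497, client_id=1):
    app = InteractiveBrokersAPI()
    app.connect(host, port, client_id)
    # run() blocks, so the message loop gets its own thread
    threading.Thread(target=app.run, daemon=True).start()
    return app

api = start_session()
# ... scheduled for 23:15, after the 23:00 TWS restart ...
api.disconnect()
time.sleep(5)          # give TWS a moment to come back up
api = start_session()  # fresh EClient instance, same client id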
Does anyone else have experience in how to handle this? I am wondering how others have fixed this issue. Any guidance would be highly appreciated.
Well, this doesn't exactly answer your question, but have you looked at ib_insync?
I've made a trading bot that uses a C++ .exe for the backend (computing the predictions) and a Python .exe for the frontend (UI, placing trades, keeping track of trades, fetching market data, etc.). Currently I'm running it simply on my laptop; the backend only uses ~1 MB of process memory at any point, while the frontend uses ~72 MB. (The Python memory is measured using this code:
import os, psutil

process = psutil.Process(os.getpid())   # the running frontend process
while Process_is_running:               # placeholder for the main-loop condition
    print(process.memory_info().rss)    # resident set size, in bytes
)
I have never worked with web-based applications (besides the python-binance API, I guess) or any VPS-type service. I am a self-taught programmer of only about 7 months.
I just want a basic nudge in the right direction, hopefully somewhere I can read up on the best way to do this.
The details of the program are as follows:
The frontend automatically logs in to Binance; of course, if it runs 24/7 this will only happen once, but if something goes wrong and it has to restart, it would log in by itself. That said, I don't mind receiving a webhook notification or something of the sort to notify me of an event like this so I can log in manually.
The frontend simply sends "commands" and market data to the backend, and the backend sends back the prediction and the current state of the algorithm (i.e. "is predicting", "on standby", "is training"). A rough sketch of this kind of exchange is included after these details.
The reason for doing this is that my location has a very unreliable power supply and not very good internet, so the machine often has to reboot, and if it stays offline for too long I might of course lose money, or the program might lose track of the latest trades.
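To make the frontend/backend exchange concrete, it is essentially this kind of request/response loop (a rough sketch only; the executable name and the JSON line protocol are illustrative, not my actual format):

import json
import subprocess

# Launch the C++ backend and talk to it over stdin/stdout.
backend = subprocess.Popen(
    ["backend.exe"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def ask_backend(command, market_data):
    # Send one JSON line, read one JSON line back.
    backend.stdin.write(json.dumps({"command": command, "data": market_data}) + "\n")
    backend.stdin.flush()
    return json.loads(backend.stdout.readline())

reply = ask_backend("predict", {"close": 27123.5, "volume": 1042.0})
print(reply)  # e.g. {"state": "is predicting", "prediction": 0.73}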
So, in summary: can anyone point me in the right direction on where to look for information on this topic, specifically related to my situation? Normally I would spend the time myself, but I am under a massive time constraint here, so any help will be appreciated :)
I'm also implementing a bot. So cool that you are doing so as well. I think that it's really the way to go, making emotionless, data-driven trades.
Anyways, if I were you, I would start an AWS instance. Either Linux or Windows.
If you can run your software on Linux, that would be cheaper, as you won't have to pay the (somewhat small) overhead of Windows licensing.
Windows instances are fine, though. Here are the docs on getting started with AWS Windows instances.
I know that you're just getting started, and you probably have multiple things that you want to do with this project. One suggestion for a direction you could take is to go serverless. Of course there will still be some server, but AWS can abstract that away from you. This can make your bot both cheaper to run and simpler to manage.
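To make "serverless" a bit more concrete: on AWS Lambda the bot's logic becomes a plain handler function that AWS invokes on a schedule or in response to an event, so you never manage the machine it runs on. A minimal sketch (everything inside the handler is illustrative):

import json

def lambda_handler(event, context):
    # 'event' carries whatever triggered the run, e.g. a scheduled
    # EventBridge rule or an API Gateway request.
    symbol = event.get("symbol", "BTCUSDT")
    # ... fetch market data, run the prediction, maybe place a trade ...
    return {
        "statusCode": 200,
        "body": json.dumps({"symbol": symbol, "status": "ok"}),
    }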
I need to integrate Stripe into a Django project and I noticed that there's a stripe-python package. This runs entirely synchronously, though. Is it a bad idea to make these types of calls from the main web server? Since it makes external calls, this presumably means the web server will be blocked while we wait for a response, which seems bad.
So, should I be running this from something like Celery? Or is it fine to run on the main thread? Anyone have experience with this?
Based on a previous project, I think using it synchronously is much better from a design perspective. With most payments, you want to keep the user on the page until the payment goes through, so they know for certain that there was no issue with it, and you can handle any problem right there rather than pulling the task off a queue and handling it later. If you think about most payments you have done online, they all happen in the main thread for this reason.
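For what it's worth, the synchronous version is just the Stripe call made inline in the view. A minimal sketch with stripe-python and a plain Django view (the key, amount handling and PaymentIntent flow are illustrative):

import stripe
from django.http import JsonResponse
from django.views.decorators.http import require_POST

stripe.api_key = "sk_test_..."  # placeholder key

@require_POST
def charge(request):
    # The worker handling this request blocks until Stripe answers,
    # which is normally a few hundred milliseconds.
    try:
        intent = stripe.PaymentIntent.create(
            amount=int(request.POST["amount"]),  # amount in cents
            currency="usd",
        )
    except stripe.error.StripeError as exc:
        # Failures surface right here, while the user is still on the page.
        return JsonResponse({"error": str(exc)}, status=402)
    return JsonResponse({"client_secret": intent.client_secret})

If a call does turn out to be slow, you can still push the non-interactive parts (receipts, fulfilment emails) onto Celery afterwards.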
I have been looking for a solution for my app that does not seem to be directly discussed anywhere. My goal is to publish an app and have it reach out, automatically, to a server I am working with. This just needs to be a simple POST. I have everything working fine, and am currently solving this problem with a cron job, but it is not quite sufficient - I would like the job to execute automatically once the app has been published, not after a minute (or whatever interval the cron job is set to).
In concept, I am trying to have my app register itself with my server, and to do this I'd like it to run once on publish and never be run again.
Is there a solution to this problem? I have looked at Task Queues and am unsure if it is what I am looking for.
Any help will be greatly appreciated.
Thank you.
Personally, this makes more sense to me as a responsibility of your deploy process, rather than of the app itself. If you have your own deploy script, add the POST request there (after a successful deploy). If you use Google's command line tools, you could wrap them in a script. If you use a third-party tool for something like continuous integration, it probably has deploy hooks you could use for this purpose.
The main question will be how to ensure it only runs once for a particular version.
Here is an outline of how you might approach it.
You create a HasRun model, which you use to store each deployed version of the app; the existence of an entity indicates that the one-time code has already been run for that version (see the sketch after this outline).
Then make sure you increment your version whenever you deploy new code.
In your warmup handler or appengine_config.py, grab the deployed version,
then in a transaction try to fetch the HasRun entity by key (the version number).
If you get the entity, don't run the one-time code.
If you can't find it, create it and run the one-time code, either in a task (make sure the process is idempotent, as tasks can be retried) or in the warmup/front-facing request.
Now you will probably want to wrap all of that in a memcache CAS operation to provide a lock of some sort, to prevent some other instance from trying to do the same thing.
Alternately if you want to use the task queue, consider naming the task the version number, you can only submit a task with a particular name once.
It still needs to be idempotent (again could be scheduled to retry) but there will only ever be one task scheduled for that version - at least for a few weeks.
Or a combination/variation of all of the above.
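A minimal sketch of the HasRun idea on the Python 2.7 runtime might look like this (register_with_server is a placeholder for your one-time POST; the transactional check-and-create plays the role of the memcache CAS lock mentioned above):

import os
from google.appengine.ext import ndb

class HasRun(ndb.Model):
    # Marker entity keyed by deployed version; if it exists, the
    # one-time code has already run for that version.
    pass

@ndb.transactional
def _claim_version(version):
    key = ndb.Key(HasRun, version)
    if key.get() is not None:
        return False            # some other instance got there first
    HasRun(key=key).put()
    return True

def register_with_server():
    # Placeholder for the one-time POST to your server; keep it idempotent.
    pass

def maybe_run_once():
    # Call this from your warmup handler or appengine_config.py.
    version = os.environ.get('CURRENT_VERSION_ID', 'unknown')
    if _claim_version(version):
        register_with_server()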
I'm fairly competent with Python but I've never 'uploaded code' to a server before and have it run automatically.
I'm working on a project that would require some code to be running 24/7. At certain points of the day, if a criterion is met, a process is started. For example: a database may contain records of what time each user wants to receive a daily newsletter (for some subjective reason) - the code would, at the right time of day, send the newsletter to the correct person. And of course, all of this is running on a cloud server.
Any help would be appreciated - even correcting my entire formulation of the problem! If you know how to do this in any other language - please reply with your solutions!
Thanks!
Here are two approaches to this problem, both of which require shell access to the cloud server.
Write the program to handle the scheduling itself. For example, sleep and wake up every few milliseconds to perform the necessary checks. You would then transfer this file to the server using a tool like scp, log in, and start it in the background using something like python myscript.py &.
Write the program to do a single run only, and use the scheduling tool cron to start it up every minute of the day.
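A sketch of the second approach (get_users_due and send_newsletter are placeholders for your own database lookup and mail-sending code):

# send_newsletters.py - does one pass and exits; cron runs it every
# minute with a crontab entry like:
#     * * * * * /usr/bin/python /home/me/send_newsletters.py
from datetime import datetime

def get_users_due(time_str):
    # Placeholder: query the database for users whose preferred send
    # time matches time_str (e.g. "08:30").
    return []

def send_newsletter(user):
    # Placeholder: build and send the email for this user.
    pass

def main():
    now = datetime.now().strftime("%H:%M")
    for user in get_users_due(now):
        send_newsletter(user)

if __name__ == "__main__":
    main()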
It took a few days, but I finally found a way to work this out. The most practical approach is to use a VPS that runs the script. The confusing part of my code was that each user would activate the script at a different time for themselves. To handle this, the VPS runs the Python script at, say, midnight (using scheduled tasking or something similar). The script then pulls the times from a database and processes the code at those times.
Thanks for your time anyways!
So I've been using app engine for quite some time now with no issues. I'm aware that if the app hasn't been hit by a visitor for a while then the instance will shut down, and the first visitor to hit the site will have a few second delay while a new instance fires up.
However, recently it seems that the instances only stay alive for a very short period of time (sometimes less than a minute), and if I have 1 instance already up and running and I refresh an app webpage, it still fires up another instance (and the page it serves is minimal homepage HTML that shouldn't require much CPU/memory). Looking at my logs, it's constantly starting up new instances, which was never the case previously.
Any tips on what I should be looking at, or any ideas of why this is happening?
Also, I'm using Python 2.7, threadsafe, python_precompiled, warmup inbound services, NDB.
Update:
So I changed my app to have at least 1 idle instance, hoping that this would solve the problem, but it is still firing up new instances even though one resident instance is already running. So when there is just the 1 resident instance (and I'm not getting any traffic except me), and I go to another page on my app, it is still starting up a new instance.
Additionally, I changed the Pending Latency to 1.5s as koma pointed out, but that doesn't seem to be helping.
The memory usage of the instances is always around 53MB, which is surprising when the pages being called aren't doing much. I'm using the F1 frontend instance class, which has a limit of 128MB, but either way 53MB seems high for what it should be doing. Is that an acceptable size when it first starts up?
Update 2: I just noticed in the dashboard that in the last 14 hours, requests to /_ah/warmup responded with a 404 error 24 times. Could this be related? Why would they be responding with a 404 status?
Main question: Why would it constantly be starting up new instances (even with no traffic)? Especially where there are already existing instances, and why do they shut down so quickly?
My solution to this was to increase the Pending Latency time.
When a webpage fired 3 AJAX requests at once, App Engine was launching new instances for the additional requests. After setting the Minimum Pending Latency to 2.5 seconds, the same instance processed all three requests and throughput was acceptable.
My project still has little load/traffic... so in addition to raising the Pending Latency, I opened an account at Pingdom and configured it to ping my App Engine project every minute.
The combination of the two means I have one instance that stays alive and serves all requests most of the time. It still scales to new instances when really necessary.
1 idle instance means that app-engine will always fire up an extra instance for the next user that comes along - that's why you are seeing an extra instance fired up with that setting.
If you remove the idle instance setting (or use the default) and just increase pending latency it should "wait" before firing the extra instance.
With regards to the main question I think #koma might be onto something in saying that with default settings app-engine will tend to fire extra instances even if the requests are coming from the same session.
In my experience app-engine is great under heavy traffic but difficult (and sometimes frustrating) to work with under low traffic conditions. In particular it is very difficult to figure out the nuances of what the criteria for firing up new instances actually are.
Personally, I have a "wake-up" cron job that brings up an instance every couple of minutes to make sure that if someone comes to the site, an instance is ready to serve. This is not ideal because it eats into my quota, but it works most of the time because traffic on my app is reasonably high.
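On App Engine that boils down to a cron.yaml entry that hits a trivial URL; roughly like this (route name and schedule are just an example of the idea, not my exact setup):

# cron.yaml:
#   cron:
#   - description: keep an instance warm
#     url: /keepalive
#     schedule: every 2 minutes
#
# and the handler it points at (webapp2 on the Python 2.7 runtime):
import webapp2

class KeepAliveHandler(webapp2.RequestHandler):
    def get(self):
        self.response.write('ok')

app = webapp2.WSGIApplication([('/keepalive', KeepAliveHandler)])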
I only started having this type of issue on Monday, February 4, around 10 pm EST, and it has continued until now. I first started noticing then that instances kept firing up and shutting down, and latency increased dramatically. It seemed that the instance scheduler was turning off idle instances too rapidly, causing subsequent thrashing.
I set minimum idle instances to 1 to stabilize latency, which worked. However, there is still thrashing of new instances. I tried the recommendations in this thread to only set minimum pending latency, but that does not help. Ultimately, idle instances are being turned off too quickly. Then when they're needed, the latency shoots up while trying to fire up new instances.
I'm not sure why you saw this a couple weeks ago, and it only started for me a couple days ago. Maybe they phased in their new instance scheduler to customers gradually? Are you not still seeing instances shutting down quickly?