Is Python generally slower on Windows than on a *nix machine? Python seems to blaze on my Mac OS X machine, whereas it seems to run slower on my Windows Vista machine. The machines are similar in processing power, and the Vista machine has 1 GB more memory.
I particularly notice this in Mercurial, but I figure this may simply be how Mercurial is packaged on Windows.
I wanted to follow up on this, and I found something that I believe is 'my answer'. It appears that Windows (Vista, which is where I notice this) is not as fast at handling files. This was mentioned by tony-p-lee.
I found this comparison of Ubuntu vs Vista vs Win7. Their results are interesting and, as they say, you need to take them with a grain of salt. But I think the results point me to the cause: Python, which I feel was indirectly tested, is about equivalent, if not a tad faster, on Windows. See the section "Richards benchmark".
Their graph of file-transfer performance (source: tuxradar.com) is in the article linked below.
I think this specifically helps address the question, because Hg is really just a series of file reads, copies, and general file handling. It's likely this is what causes the delay.
http://www.tuxradar.com/content/benchmarked-ubuntu-vs-vista-vs-windows-7
No real numbers here, but it certainly feels like startup time is slower on Windows platforms. I regularly switch between Ubuntu at home and Windows 7 at work, and startup is an order of magnitude faster on Ubuntu, despite my work machine being at least 4x the speed.
As for runtime performance, it feels about the same for "quiet" applications. Any GUI operations using Tk on Windows are definitely slower. Console applications on Windows are slower too, but this is most likely due to the Windows cmd window rendering slowly rather than Python itself running slowly.
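If you want a rough number rather than a feeling, one way to measure startup is to time repeated launches of a no-op interpreter from Python itself. This is just a minimal sketch (it assumes Python 3 and times whichever interpreter runs it):

```python
# Time how long it takes to launch the interpreter and exit immediately,
# averaged over several runs to smooth out noise.
import subprocess
import sys
import time

RUNS = 20
start = time.perf_counter()
for _ in range(RUNS):
    subprocess.run([sys.executable, "-c", "pass"], check=True)
elapsed = time.perf_counter() - start
print(f"average startup: {elapsed / RUNS * 1000:.1f} ms per launch")
```

Running the same snippet on each OS gives directly comparable numbers.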
Maybe Python depends on opening a lot of files (importing different modules).
Windows doesn't handle file opens as efficiently as Linux.
Or maybe Linux has more utilities that depend on Python, so Python scripts and modules are more likely to already be sitting in the system's file cache.
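A quick way to test the file-open theory directly is to time opening a batch of small files on each OS. A minimal sketch, assuming you point it at a real directory with plenty of files (the path below is just a placeholder):

```python
# Open and read the first block of many small files, which roughly mimics
# the I/O pattern of importing lots of modules, and time the whole batch.
import pathlib
import time

target = pathlib.Path("path/to/some/directory")   # placeholder directory
files = [p for p in target.rglob("*") if p.is_file()][:2000]

start = time.perf_counter()
for path in files:
    with open(path, "rb") as f:
        f.read(4096)   # only the first block; we mostly care about open latency
print(f"opened {len(files)} files in {time.perf_counter() - start:.2f} s")
```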
I run Python locally on Windows XP and 7 as well as OS X on my MacBook. I've seen no noticeable performance differences in the command-line interpreter, wxWidgets apps run the same, and Django apps also perform virtually identically.
One thing I noticed at work was that the Kaspersky virus scanner tended to slow the Python interpreter WAY down. It would take 3-5 seconds for the Python prompt to appear and 7-10 seconds for Django's test server to fully load. Properly disabling its active scanning brought the startup times back to essentially zero.
With the OS and network libraries, I can confirm slower performance on Windows, at least for versions <= 2.6.
I wrote a CLI podcast-fetcher script which ran great on Ubuntu, but then wouldn't download anything faster than about 80 kB/s (where ~1.6 MB/s is my usual max) on either XP or 7.
I could partially correct this by tweaking the buffer size for download streams, but there was definitely a major bottleneck on Windows, either in the network stack or in I/O, that simply wasn't a problem on Linux.
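For reference, the buffer tweak was along these lines: read the response in larger chunks instead of the default. A minimal Python 3 sketch (the URL and chunk size are just illustrative, not code from my actual script):

```python
# Download a file while controlling the read chunk size; experimenting with
# larger chunks (e.g. 256 KB) is the kind of tweak that partially helped on Windows.
import urllib.request

URL = "http://example.com/episode.mp3"   # placeholder URL
CHUNK = 256 * 1024                        # chunk size to experiment with

with urllib.request.urlopen(URL) as response, open("episode.mp3", "wb") as out:
    while True:
        block = response.read(CHUNK)
        if not block:
            break
        out.write(block)
```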
Based on this, it seems that system and OS-interfacing tasks are better optimized for *nixes than they are for Windows.
Interestingly, I ran a direct comparison of a popular Python app on a Windows 10 x64 machine (admittedly low-powered) and an Ubuntu 14.04 VM running on the same machine.
I have not tested load speeds etc, but am just looking at processor usage between the two. To make the test fair, both were fresh installs and I duplicated a part of my media library and applied the same config in both scenarios. Each test was run independently.
On Windows, Python was using 20% of my processor power and it pushed the System Compressed Memory process up to 40% (this is an old machine with 6 GB of RAM).
With the VM on Ubuntu (linked to my Windows file system), the processor usage is about 5%, with compressed memory down to about 20%.
This is a huge difference. My trigger for running this test was that the Python app was running my CPU up to 100% and failing to operate. I have now been running it in the VM for 2 weeks, and my processor usage is down to 65-70% on average. So on both short- and long-term tests, and taking into account the overhead of running a VM and a second operating system, this Python app is significantly faster on Linux. I can also confirm that the Python app responds better, as does everything else on my machine.
Now this could be very application specific, but it is at minimum interesting.
The PC is an old AMD II X2 X265 processor with 6 GB of RAM and an SSD (which Python ran from), while the VM used a regular 5200 rpm HD that also gets used for a ton of other stuff, including recording from 2 CCTV cameras.
I have a new Surface Book 2 running Windows build 18.09. The processor is an 8th-generation i7 (8 cores) and it has 16 GB of RAM.
When I run any type of Python code, the performance is unbearably slow. I really do not think this is normal Python performance on this laptop, for the following reasons:
The Resource Monitor shows 5% processor usage for any Python code I run. With 8 cores making up 100%, a single Python process should be using 12.5%.
I have another Windows 2-in-1 tablet (Miix 520) with a 7th-generation i7 that normally throttles a lot. Still, this tablet runs the same Python code with the same Python interpreter around 60% faster, not to mention my Linux laptop with a 7th-generation i7, which runs the code around 4-5 times faster (a sketch of the kind of comparison I mean is below).
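A quick way to get one comparable number across the machines is to time an identical CPU-bound loop on each of them. This is only a minimal sketch of that kind of measurement (the workload is arbitrary, not the code I actually need to run):

```python
# Time a fixed CPU-bound workload so the same number can be compared
# across machines and operating systems.
import time

def workload(n: int = 2_000_000) -> int:
    total = 0
    for i in range(n):
        total += (i * i) % 7
    return total

start = time.perf_counter()
workload()
print(f"CPU-bound loop: {time.perf_counter() - start:.2f} s")
```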
I have no clue what I can do to get appropriate Python performance. One comment I found elsewhere suggested that Windows Defender is slowing down Python processes. I cannot deactivate it because this is a work computer that is partially managed by IT. However, I can add folders and files to its exclusion list, which I did for the whole Anaconda folder (I use Anaconda to manage Python environments on Windows) and for python.exe. Unfortunately, this did not bring any improvement.
Does anyone have any experience with, or an explanation for, such low Python performance on Windows (or the Surface Book 2 in particular)? Does anyone have suggestions for what could be done to get "normal" Python performance?
It turned out that Windows Defender was slowing down the execution of Python processes.
Adding python.exe and the folder I execute my script from to Windows Defender's exclusion list leads to a significant performance boost.
Another reason I found out about is that Windows seems to have lower disk access rates than Linux. This was significant in my case because I processed 50,000 images.
I'm trying to profile a Python application in PyCharm; however, when the application terminates and the profiler results are displayed, PyCharm requires all 16 GB of RAM that I have, which makes PyCharm unusable.
Said Python application is doing reinforcement learning, so it does take a bit of time to run (~10 min or so); however, while running it does not require large amounts of RAM.
I'm using the newest version of PyCharm on Ubuntu 16.04, and cProfile is what PyCharm uses for profiling.
I'd be very glad if one of you knows a solution.
EDIT: It seems this was an issue within PyCharm, which has since been fixed (as of 2017/11/21)
It's a defect within PyCharm: https://youtrack.jetbrains.com/issue/PY-25768
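If you are stuck on an affected version, one workaround is to profile outside PyCharm and read the dump with the stdlib pstats module instead of the built-in results viewer. A minimal sketch, where the script name is just a placeholder:

```python
# First run the application under cProfile from a terminal, e.g.:
#   python -m cProfile -o profile.out your_rl_script.py   (placeholder name)
# Then inspect the dump without PyCharm's results view:
import pstats

stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(20)   # 20 most expensive call paths
```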
I am targeting an embedded platform with linux_rt, and would like to compile CPython. I am not asking whether Python is appropriate for realtime, or about its latency. I AM asking about compiling under the platform's constraints.
I would like an interpreter embedded in a C shared library, but will also accept an executable binary if need be.
Any C compiling I've done has been for mainstream OS deployment, and I usually just hit make install. I'm not afraid to get a little dirty, but I am afraid of long-term maintenance and repeatability.
To avoid as much memory overhead as possible, are there any compiler configurations that can be changed from defaults? Can I easily strip sections of the standard library I know will not be needed?
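(I assume one way to see what is strippable is to dump which modules actually get loaded after importing the code that will ship; a minimal sketch, with your_app as a placeholder:)

```python
# Print every module present in sys.modules after importing the application,
# as a starting point for deciding which stdlib pieces are safe to strip.
import sys

# import your_app   # placeholder: pull in the code that will actually ship

for name in sorted(sys.modules):
    print(name)
print(f"{len(sys.modules)} modules loaded")
```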
The target platform has a 600 MHz Celeron and 256 MB of RAM. The required firmware is built for a v2.6 kernel (might be 2.4). The default OS image uses BusyBox, and most standard system libraries are only minimally available. The root filesystem is around 100 MB (flash), although I will have an external memory card mounted and can extend root onto it.
Python should have 70% CPU and 128 MB of RAM at most times, although I can imagine sloppy execution of the interpreter at times, and on RT Linux that could start to add up. Just trying to take precautions before I dive in.
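(To keep an eye on that 128 MB budget at runtime, I assume the stdlib resource module is enough; a minimal sketch for Linux, where ru_maxrss is reported in kilobytes:)

```python
# Report the interpreter's peak resident set size; on Linux ru_maxrss is in KB.
import resource

peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak_kb / 1024:.1f} MB")
```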
Looking for simple Do's or Don'ts. Reference to similar projects would be great, but I really want to stick with CPython where possible.
I do not have the target platform in the shop yet, so I cannot post any tests. Will have the unit in 2 weeks and will update this post, at that time, if needed.
Make a VM with the target configuration to help you get started (VirtualBox or QEMU). If you don't have a root FS, one place to start is TinyCore, which is very small and configurable, and can also run on your laptop: http://www.linuxjournal.com/article/11023
I use Eclipse with PyDev to develop Python code, and I wouldn't want to miss all its useful IDE features. One thing is a little annoying: the latency between when I type and when the source code changes is a little too high. (Not as snappy as, for example, Sublime Text 2.)
Is this due to overhead of some editor features which can be disabled? Can I do something to tune my editor settings for speed and responsiveness?
File size might be a factor, but it's hard to tell; it is not fast enough even with small files.
System Info:
MacBookAir3,2
Mac OS X 10.6.8
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04-415-10M3646)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01-415, mixed mode)
Eclipse IDE for Java Developers
Version: Indigo Service Release 2
Build id: 20120216-1857
PyDev Version 2.5.0
If you're using OpenJDK, switching to Oracle JDK also improves responsiveness.
This may not be what you are looking for, but anyway:
Stop and deactivate unnecessary services.
Increase RAM
Configure or deactivate resident software, like antivirus and such. If you can, deactivate it for a short period of time just to find out whether it changes anything, without compromising security.
Switch your CPU: depending on your mainboard specs, mainboards sometimes ship with a low-cost CPU and an upgrade is possible.
Get a bigger and faster machine
I'll be taking a Python-based computer science class next semester using my MacBook Pro. It will be centered around a custom-designed package for this class. The problem is that this package is being sponsored by Microsoft Research, so it was obviously designed with Windows in mind. Supposedly, it runs on Mac OS and Linux too, but they say they don't officially support Snow Leopard whatsoever.
My concern is that there will be some sort of minuscule differences between the Python code on a Mac and on a PC. The homework is submitted online and is graded on results; apparently, they don't actually look at the code itself.
Is this a concern? Should I install Windows in a VM/partition and be done with it? Or should I stay where I feel most comfortable? After all, switching back and forth constantly would be a huge hassle. Thanks for your help!
If the class expects the code to run on Windows then I would install a VM with Windows on it since it is possible that some things may not work quite the same way (especially if you are doing system-specific things like file-system access or executing OS commands).
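A couple of classic examples of the kind of difference that bites people, as a minimal sketch (not code from the class package):

```python
# Two common cross-platform pitfalls: hard-coded path separators and
# OS-specific shell commands.
import os
import subprocess
import sys

# Hard-coded backslashes only work as separators on Windows;
# os.path.join builds a valid path on every platform.
portable_path = os.path.join("data", "results.txt")
print(portable_path)

# Shell commands differ between platforms, so guard or avoid them.
if sys.platform == "win32":
    subprocess.call(["cmd", "/c", "dir"])
else:
    subprocess.call(["ls"])
```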
Classwork/homework always goes smoother when you have the exact same environment as the professor and the rest of the class.
Definitely start with Mac. If it turns out that it really does need Windows, you can switch once you're sure. But Python development is definitely more natural on a Unix-based machine.
Most online graders will let you submit multiple times, and the first assignment is usually easy, so you should know pretty quickly if using a Mac is causing you problems. In the meantime though, you'll have a much smoother ride doing Python on a Mac than on Windows.
If they will be testing your code on Windows then you really need to be targeting that platform. However, if you feel more comfortable on the Mac, do your development there but also run a virtual Windows machine so you can test on the target platform. I would suggest the excellent VirtualBox. You can share local folders with the VM, which reduces the pain of switching back and forth; once the VM has Python set up, you can just hop in and run the code directly from the directory on the Mac you developed in.
From their site it looks like Mac is fully supported (up to 10.5; it's true that 10.6 is different enough to give occasional problems, and I haven't upgraded yet even though I did buy a family pack of 10.5-to-10.6 upgrades, as I'm not looking for trouble right now). If you can use a MacBook with 10.5, I'd say go for it: the familiarity and extra productivity are worth the minuscule risk that, despite all their claims of support, something goes wrong (and you can in fact download and start testing right now!). If your Mac options are limited to 10.6, then I'd go for a VMware or Parallels VM with a Windows installation instead (not sure if Windows 7 is fully supported yet; maybe XP is the more prudent option).
Develop and test on a Mac. If it works on the Mac, then test it on Windows before submitting. Done this tons of times with my own programming courses, albeit with a different set of languages and technologies.
Go Mac and never go back.
More seriously, a Mac offers UNIX environment, and Windows offers blue screens.