VS Code is consuming a lot of memory. Why? - python

I recently installed Visual Studio Code on my Ubuntu 20.04 machine (4 GB RAM). It was consuming about 200-300 MB without any extensions installed (which, in my opinion, is already too much).
I installed the Python extension from Microsoft. It seemed like a small extension at first, but after installation it is literally gobbling up all the memory (~1.5 GB!).
Please refer to the screenshot below.
Why is this happening? Is there any issue with the extension, or with VS Code itself?
These are my system readings once the extension is up and running:
Thank you.

So, your VS Code extension is taking up 1.5 GB of RAM, and you want to know why.
The Python extension you installed runs a separate language-server process to provide IntelliSense (intelligent code completion and error highlighting) in real time.
As is evident from your screenshot, the culprit is Microsoft.Python.LanguageServer.
A Language Server is a special kind of Visual Studio Code extension that powers the editing experience for many programming languages. With Language Servers, you can implement autocomplete, error-checking (diagnostics), jump-to-definition, and many other language features supported in VS Code.
My suggestion:
I would suggest you use VSCodium, an open-source alternative to VS Code. It is essentially the same as VS Code, just with the proprietary Microsoft bits removed. You can install the Python extension there as well, and it doesn't hog the RAM as much.
I'm attaching a screenshot to prove my point.
Use this command to install VSCodium (after adding the VSCodium apt repository; see the project's README):
sudo apt update && sudo apt install codium
My system configuration
Core i5 8th gen
8 GB RAM
Parrot Security OS 4.10 (Parrot is more resource-hungry than Ubuntu).
EDIT: Consumption is now down from 1.5 GB to around 600 MB after switching to Codium. I think it's a win-win.

Related

Python VS code taking too much memory and taking too long to auto complete

I am a beginner learning to program in Python using VS Code, so my knowledge of both VS Code and the Python extension is limited. I am facing two very annoying problems.
Firstly, when the Python extension starts, the memory usage of VS Code jumps from ~300 MB to 1-1.5 GB. If I have anything else open, everything gets extremely sluggish. This seems a bit abnormal to me. I have tried disabling all other extensions, but the memory consumption remains the same. Is there a way (or some settings I can change) to reduce the memory consumption?
Secondly, IntelliSense autocomplete takes quite a bit of time (sometimes 5-10 minutes) before it kicks in. It also sometimes stops working completely. Any pointers on what could be causing that?
PS: I am using VS Code version 1.50 (September update) and Anaconda Python 4.8.3.
As a code editor, VS Code needs to download the corresponding language services and language extensions in addition to the memory VS Code itself occupies, so some extra memory use is expected.
For memory, it is recommended that you uninstall unnecessary third-party extensions and duplicate language services. In addition, using virtual environments in VS Code is a good habit: the virtual environment's folder lives inside the project, and installed packages are stored there without occupying system-wide resources.
For automatic completion, this function is provided by the corresponding language service and extension. Please try reloading VS Code and waiting for the language service to finish loading before editing code.
Therefore, you can try the extension "Pylance", which provides outstanding language-service features as well as automatic completion.
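The virtual-environment habit mentioned above can be sketched like this (a minimal example using the standard `venv` module; the folder name `.venv` is just a common convention):

```shell
# create an isolated environment inside the project folder
python3 -m venv .venv

# activate it (on Windows: .venv\Scripts\activate)
. .venv/bin/activate

# packages now install into .venv instead of system-wide
python -m pip --version
```

Point VS Code's Python interpreter at `.venv` afterwards so the extension indexes only the packages this project actually uses.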
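If you go the Pylance route, selecting it is a one-line change in settings.json once the Pylance extension is installed (VS Code's settings file tolerates comments):

```json
{
    // settings.json: use Pylance as the Python language server
    "python.languageServer": "Pylance"
}
```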
At least for IntelliSense, you could try setting
"python.jediEnabled": false
in your settings.json file. This will switch you to a newer version of IntelliSense, but it might need to download first.
But beyond that, I'd suggest using PyCharm instead. It's quite snappy, and it has a free version.

What can I do to make Eclipse PyDev editor more reactive?

I use Eclipse with PyDev to develop Python code, and I wouldn't want to miss all its useful IDE features. One thing is a little annoying, though: the latency between when I type and when the source code changes is a little too high. (Not as snappy as, for example, Sublime Text 2.)
Is this due to overhead of some editor features which can be disabled? Can I do something to tune my editor settings for speed and responsiveness?
File size might be a factor, but it's hard to tell; the editor is not fast enough even with small files.
System Info:
MacBookAir3,2
Mac OS X 10.6.8
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04-415-10M3646)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01-415, mixed mode)
Eclipse IDE for Java Developers
Version: Indigo Service Release 2
Build id: 20120216-1857
PyDev Version 2.5.0
If you're using OpenJDK, switching to Oracle JDK also improves responsiveness.
This may not be what you are looking for, but anyway:
Stop and deactivate unnecessary services.
Increase RAM.
Configure or deactivate resident software, such as antivirus tools. If possible, deactivate them for a short period of time just to find out whether that changes anything, without compromising security.
Switch your CPU: depending on your mainboard specs, mainboards sometimes ship with a low-cost CPU and an upgrade is possible.
Get a bigger and faster machine.
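Before touching hardware, it may also be worth raising Eclipse's own heap limits in eclipse.ini, a common cause of editor lag on Java 6-era Eclipse installs (the values below are illustrative; tune them to your available RAM):

```
-vmargs
-Xms256m
-Xmx1024m
-XX:MaxPermSize=256m
```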

Python Performance on Windows

Is Python generally slower on Windows vs. a *nix machine? Python seems to blaze on my Mac OS X machine, whereas it seems to run slower on my Windows Vista machine. The machines are similar in processing power, and the Vista machine has 1 GB more memory.
I particularly notice this in Mercurial, but I figure this may simply be how Mercurial is packaged on Windows.
I wanted to follow up on this, and I found something that I believe is 'my answer'. It appears that Windows (Vista, which is what I notice this on) is not as fast at handling files. This was mentioned by tony-p-lee.
I found this comparison of Ubuntu vs Vista vs Win7. Their results are interesting and, as they say, you need to take them with a grain of salt. But I think the results point me to the cause: Python, which I feel was indirectly tested, is about equivalent, if not a tad bit faster, on Windows. See the section "Richards benchmark".
Here is their graph for file transfers:
(source: tuxradar.com)
I think this specifically helps address the question, because Hg is really just a series of file reads, copies and general file handling. It's likely this is what causes the delay.
http://www.tuxradar.com/content/benchmarked-ubuntu-vs-vista-vs-windows-7
No real numbers here but it certainly feels like the start up time is slower on Windows platforms. I regularly switch between Ubuntu at home and Windows 7 at work and it's an order of magnitude faster starting up on Ubuntu, despite my work machine being at least 4x the speed.
As for runtime performance, it feels about the same for "quiet" applications. Any GUI operations using Tk on Windows are definitely slower, and console applications on Windows are slower too, but this is most likely due to the Windows cmd rendering being slow rather than Python itself running slowly.
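For anyone who wants real numbers rather than a feeling, interpreter start-up can be timed from Python itself. A rough sketch (it measures process spawn plus interpreter initialization together; run the same script on both operating systems to compare):

```python
import subprocess
import sys
import time


def interpreter_startup_seconds(runs: int = 5) -> float:
    """Average wall-clock time to launch the interpreter and exit."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        # spawn a fresh interpreter that does nothing and exits
        subprocess.run([sys.executable, "-c", "pass"], check=True)
        total += time.perf_counter() - start
    return total / runs


if __name__ == "__main__":
    print(f"average startup: {interpreter_startup_seconds():.3f} s")
```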
Maybe Python depends more on opening a lot of files (importing different modules), and Windows doesn't handle file opens as efficiently as Linux.
Or maybe Linux has more utilities that depend on Python, so Python scripts and modules are more likely to already be buffered in the system cache.
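The "many small file opens" theory above is easy to test directly on each platform. A rough benchmark (file count and contents are arbitrary):

```python
import os
import tempfile
import time


def time_small_file_opens(count: int = 200) -> float:
    """Create `count` tiny files, then time opening and reading them all."""
    with tempfile.TemporaryDirectory() as tmp:
        paths = []
        for i in range(count):
            path = os.path.join(tmp, f"f{i}.txt")
            with open(path, "w") as fh:
                fh.write("x")
            paths.append(path)
        # time only the open/read phase, not file creation
        start = time.perf_counter()
        for path in paths:
            with open(path) as fh:
                fh.read()
        return time.perf_counter() - start


if __name__ == "__main__":
    print(f"{time_small_file_opens():.4f} s for 200 opens")
```

If the Windows number is much larger than the Linux one on comparable hardware, that supports the file-handling explanation; note that antivirus scanning (mentioned in another answer here) can inflate it further.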
I run Python locally on Windows XP and 7 as well as OS X on my Macbook. I've seen no noticeable performance differences in the command-line interpreter; wx widget apps run the same, and Django apps also perform virtually identically.
One thing I noticed at work was that the Kaspersky virus scanner tended to slow the python interpreter WAY down. It would take 3-5 seconds for the python prompt to properly appear and 7-10 seconds for Django's test server to fully load. Properly disabling its active scanning brought the start up times back to 0 seconds.
With the OS and network libraries, I can confirm slower performance on Windows, at least for versions <= 2.6.
I wrote a CLI podcast-fetcher script which ran great on Ubuntu, but then wouldn't download anything faster than about 80 kB/s (where ~1.6 MB/s is my usual max) on either XP or 7.
I could partially correct this by tweaking the buffer size for download streams, but there was definitely a major bottleneck on Windows, either over the network or IO, that simply wasn't a problem on Linux.
Based on this, it seems that system and OS-interfacing tasks are better optimized for *nixes than they are for Windows.
Interestingly I ran a direct comparison of a popular Python app on a Windows 10 x64 Machine (low powered admittedly) and a Ubuntu 14.04 VM running on the same machine.
I have not tested load speeds etc, but am just looking at processor usage between the two. To make the test fair, both were fresh installs and I duplicated a part of my media library and applied the same config in both scenarios. Each test was run independently.
On Windows, Python was using 20% of my processor power, and it triggered System Compressed Memory to run up to 40% (this is an old machine with 6 GB of RAM).
With the VM on Ubuntu (linked to my Windows file system), processor usage is about 5%, with compressed memory down to about 20%.
This is a huge difference. My trigger for running this test was that the app using Python was driving my CPU up to 100% and failing to operate. I have now been running it in the VM for 2 weeks, and my processor usage is down to 65-70% on average. So on both a short- and long-term test, and taking into account the overhead of running a VM and a second operating system, this Python app is significantly faster on Linux. I can also confirm that the Python app responds better, as does everything else on my machine.
Now this could be very application specific, but it is at minimum interesting.
The PC is an old AMD II X2 X265 processor, 6 GB of RAM, and an SSD (which Python ran from, but the VM used a regular 5200 rpm HD that is also used for a ton of other stuff, including recording from 2 CCTV cameras).

Python on windows7 intel 64bit

I've been messing around with Python over the weekend and find myself pretty much back at where I started.
I've specifically been having issues with easy_install and nltk giving me errors about not finding packages, etc.
I've tried both Python 2.6 and Python 3.1.
I think part of the problem may be that I'm running Windows 7 in 64-bit mode on an Intel T5750 chipset.
I'm thinking of downloading the Python for Windows extensions (http://sourceforge.net/projects/pywin32/files/), but I'm not sure which version to get.
Why do packages have a specific AMD64 version, but not Intel?
However, this may not even solve my problems. Any recommendations on getting Python to work in this environment?
I've currently got Python 3.1 installed, and removed 2.6
The most popular 64-bit mode for "86-oid" processors is commonly known as AMD64 because AMD came up with it first (Intel at the time was pushing Itanium instead, which never really caught fire; it's still around, but I don't even know if Win7 supports it). Intel later had to imitate that mode to get into the mass 64-bit market, but it's still commonly known as AMD64 after its originator. For Windows 7 in 64-bit mode, AMD64 seems likely to be what you want.
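To see which of these names applies to your own machine and interpreter, the standard library can report both (a quick check; the exact strings differ per platform):

```python
import platform
import struct


def describe_interpreter() -> dict:
    """Report the OS-level machine type and the interpreter's bitness."""
    return {
        # e.g. "AMD64" on 64-bit Windows, "x86_64" on Linux
        "machine": platform.machine(),
        # pointer size in bits: 32 or 64, for the running interpreter
        "bits": struct.calcsize("P") * 8,
    }


if __name__ == "__main__":
    print(describe_interpreter())
```

Note that a 32-bit Python can run on a 64-bit Windows, in which case `machine` says AMD64 while `bits` says 32; extensions must match the interpreter's bitness, not the OS's.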
The 64-bit Windows downloads from ActiveState come with a few important pieces that aren't part of the standard python.org 64-bit Windows builds, and they might make your life easier.

Do I only need to check the users machine for the version of the MSVCR90.dll that was installed with my python installation?

I was working on an update to my application, and before I began I migrated to 2.6.2 because it seemed to be the time to. I walked right into the issue of having problems building my application with py2exe because of the MSVCR90 DLLs. There seems to be a fair amount of information on how to solve this issue, including some good answers here on SO.
I am deploying to users that more likely than not have 32-bit XP or Vista machines. Some of my users will be migrated to 64-bit Vista in the near future. My understanding of these issues is that I have to make sure they have the correct DLLs matching the version of Python that exists on the application development computer. Since I have an x86 processor, they need the x86 version of the DLLs; the configuration of their computer is irrelevant.
Is this correct or do I have to account for their architecture if I am going to deliver the dlls as private assemblies?
Thanks for any responses
Vista 64-bit has a 32-bit emulation layer (WOW64), I believe, so you will not need to worry about this.
However, I would just tell them to install the MSVC runtime redistributable, which is supposed to be the correct way to deal with this SxS mess.
From what I have gathered and learned, the correct answer is that I have to worry about the MSVCR90 DLL used by the versions of Python and mx that the application I am building relies on. This is important because it means that if the user has a different configuration, I can't easily fix that problem unless I do some tricks to install the correct DLL. If I have them download the MS installer and their hardware (CPU type) does not match mine, they will potentially run into problems. There is a really good set of instructions on the wxPython users group site: WX Discussion.
