Script from stdin use case - python

Taking Python as an example of a modern scripting language, it has the option of reading a program (as opposed to input data for the program) from stdin. The REPL is the obvious use case where stdin is a terminal, but Python is also designed to handle the scenario where stdin is not a terminal.
What use cases are there for reading the program itself from noninteractive stdin?
(The reason I ask is that I'm working on a scripting language myself, and wondering whether this is an important feature to provide, and if so, what the specifics need to look like.)

If you want to execute code generated by some tool, it could be useful to be able to pipe the generated code into your interpreter/compiler.
Simply support it ;) Checking if stdin is a tty or not is not hard anyway.
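For illustration, a minimal Python sketch of that check; the exec call at the end stands in for whatever your own language's entry point would do with the source:

import sys

if sys.stdin.isatty():
    # Interactive terminal: start a REPL.
    print("interactive mode")
else:
    # Noninteractive stdin: treat the whole stream as program source,
    # e.g. `generator | python thisfile.py` or `python < program.py`.
    source = sys.stdin.read()
    exec(compile(source, "<stdin>", "exec"))  # for illustration only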


When should I use subprocess.Popen instead of os.popen?

It seems both execute a subprocess and create a pipe for input/output; it's just that subprocess is newer.
My question is: is there anything subprocess.Popen can do that os.popen cannot, such that we need the new subprocess module?
Why did Python create a new module instead of enhancing os.popen?
Short answer: Never use os.popen, always use subprocess!
As you can see from the Python 2.7 os.popen docs:
Deprecated since version 2.6: This function is obsolete. Use the subprocess module. Check especially the Replacing Older Functions with the subprocess Module section.
There were various limitations and problems with the old os.popen family of functions. And as the docs mention, the pre-2.6 versions weren't even reliable on Windows.
The motivation behind subprocess is explained in PEP 324 -- subprocess - New process module:
Motivation
Starting new processes is a common task in any programming language, and very common in a high-level language like Python. Good support for this task is needed, because:
- Inappropriate functions for starting processes could mean a security risk: If the program is started through the shell, and the arguments contain shell meta characters, the result can be disastrous. [1]
- It makes Python an even better replacement language for over-complicated shell scripts.
Currently, Python has a large number of different functions for process creation. This makes it hard for developers to choose.
The subprocess module provides the following enhancements over previous functions:
- One "unified" module provides all functionality from previous functions.
- Cross-process exceptions: Exceptions happening in the child before the new process has started to execute are re-raised in the parent. This means that it's easy to handle exec() failures, for example. With popen2, for example, it's impossible to detect if the execution failed.
- A hook for executing custom code between fork and exec. This can be used for, for example, changing uid.
- No implicit call of /bin/sh. This means that there is no need for escaping dangerous shell meta characters.
- All combinations of file descriptor redirection are possible. For example, the "python-dialog" [2] needs to spawn a process and redirect stderr, but not stdout. This is not possible with current functions, without using temporary files.
- With the subprocess module, it's possible to control if all open file descriptors should be closed before the new program is executed.
- Support for connecting several subprocesses (shell "pipe").
- Universal newline support.
- A communicate() method, which makes it easy to send stdin data and read stdout and stderr data, without risking deadlocks. Most people are aware of the flow control issues involved with child process communication, but not all have the patience or skills to write a fully correct and deadlock-free select loop. This means that many Python applications contain race conditions. A communicate() method in the standard library solves this problem.
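To make that last communicate() bullet concrete, a minimal sketch (Unix tr is just a stand-in filter command here):

import subprocess

# Send data on stdin and collect stdout/stderr without deadlocking,
# as described by the PEP's communicate() bullet above.
proc = subprocess.Popen(
    ["tr", "a-z", "A-Z"],        # any filter-style command works here
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)
out, err = proc.communicate("hello subprocess\n")
print(out)  # HELLO SUBPROCESS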
Please see the PEP link for the Rationale, and further details.
Aside from the safety & reliability issues, IMHO, the old os.popen family was cumbersome and confusing. It was almost impossible to use correctly without closely referring to the docs while you were coding. In comparison, subprocess is a godsend, although it's still wise to refer to the docs while using it. ;)
Occasionally, one sees people recommending os.popen rather than subprocess.Popen in Python 2.7 (e.g. in Python subprocess vs os.popen overhead) because it's faster. Sure, it's faster, but that's because it doesn't do various things that are vital to guarantee that it's working safely!
FWIW, os.popen itself still exists in Python 3; however, it's safely implemented via subprocess.Popen, so you might as well just use subprocess.Popen directly yourself. The other members of the os.popen family no longer exist in Python 3. The os.spawn family of functions still exists in Python 3, but the docs recommend using the more powerful facilities provided by the subprocess module instead.
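For illustration, a minimal sketch of the modern replacement; the command and flags are just placeholders, and capture_output needs Python 3.7+:

import subprocess

# Old, deprecated style (Python 2): implicitly runs through the shell.
# output = os.popen("ls -l /tmp").read()

# subprocess equivalent: no shell, arguments passed as a list,
# so shell metacharacters in arguments are harmless.
result = subprocess.run(
    ["ls", "-l", "/tmp"],   # command as an argv list, no /bin/sh involved
    capture_output=True,    # capture stdout and stderr (Python 3.7+)
    text=True,              # decode bytes to str
    check=True,             # raise CalledProcessError on nonzero exit
)
print(result.stdout)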

Why does my Python script run faster than the Shell script? [duplicate]

Obviously Python is more user friendly; a quick search on Google shows many results saying that, since Python is byte-compiled, it is usually faster. I even found this, which claims that you can see an improvement of over 2000% on dictionary-based operations.
What is your experience on this matter? In which kind of task each one is a clear winner?
Typical mainframe flow...
Input Disk/Tape/User (runtime) --> Job Control Language (JCL) --> Output Disk/Tape/Screen/Printer
                                     |                       ^
                                     v                       |
                                     `--> COBOL Program -----'

Typical Linux flow...

Input Disk/SSD/User (runtime) --> sh/bash/ksh/zsh/... ----------> Output Disk/SSD/Screen/Printer
                                    |                       ^
                                    v                       |
                                    `--> Python script -----'
                                    |                       ^
                                    v                       |
                                    `--> awk script --------'
                                    |                       ^
                                    v                       |
                                    `--> sed script --------'
                                    |                       ^
                                    v                       |
                                    `--> C/C++ program -----'
                                    |                       ^
                                    v                       |
                                    `--- Java program ------'
                                    |                       ^
                                    v                       |
                                    :                       :
Shells are the glue of Linux
Linux shells like sh/ksh/bash/... provide input/output/flow-control designation facilities much like the old mainframe Job Control Language... but on steroids! They are Turing complete languages in their own right while being optimized to efficiently pass data and control to and from other executing processes written in any language the O/S supports.
Most Linux applications, regardless of what language the bulk of the program is written in, depend on shell scripts, and Bash has become the most common. Clicking an icon on the desktop usually runs a short Bash script. That script, either directly or indirectly, knows where all the needed files are, sets variables and command line parameters, and finally calls the program. That's a shell's simplest use.
Linux as we know it, however, would hardly be Linux without the thousands of shell scripts that start up the system, respond to events, control execution priorities, and compile, configure, and run programs. Many of these are quite large and complex.
Shells provide an infrastructure that lets us use pre-built components that are linked together at run time rather than compile time. Those components are free-standing programs in their own right that can be used alone or in other combinations without recompiling. The syntax for calling them is indistinguishable from that of a Bash builtin command, and there are in fact numerous builtin commands for which there is also a stand-alone executable on the system, often having additional options.
There is no language-wide difference between Python and Bash in performance. It entirely depends on how each is coded and which external tools are called.
Any of the well-known tools like awk, sed, grep, bc, dc, tr, etc. will leave either language in the dust at those operations. Bash is then preferred for anything without a graphical user interface, since it is easier and more efficient to call such a tool and pass data back from it with Bash than with Python.
Performance
Whether overall throughput and/or responsiveness will be better or worse than the equivalent Python depends on which programs the Bash script calls and how well suited they are to their subtask. To complicate matters, Python, like most languages, can also call other executables, though doing so is more cumbersome and thus less often used.
User Interface
One area where Python is the clear winner is user interface. That makes it an excellent language for building local or client-server applications as it natively supports GTK graphics and is far more intuitive than Bash.
Bash only understands text. Other tools must be called for a GUI and data passed back from them. A Python script is one option. Faster but less flexible options are the binaries like YAD, Zenity, and GTKDialog.
While shells like Bash work well with GUIs like Yad, GtkDialog (embedded XML-like interface to GTK+ functions), dialog, and xmessage, Python is much more capable and so better for complex GUI windows.
Summary
Building with shell scripts is like assembling a computer with off-the-shelf components the way desktop PCs are.
Building with Python, C++ or most any other language is more like building a computer by soldering the chips (libraries) and other electronic parts together the way smartphones are.
The best results are usually obtained by using a combination of languages where each can do what they do best. One developer calls this "polyglot programming".
Generally, bash works better than python only in those environments where python is not available. :)
Seriously, I have to deal with both languages daily, and will take python instantly over bash if given the choice. Alas, I am forced to use bash on certain "small" platforms because someone has (mistakenly, IMHO) decided that python is "too large" to fit.
While it is true that bash might be faster than python for some select tasks, it can never be as quick to develop with, or as easy to maintain (at least after you get past 10 lines of code or so). Bash's sole strong point wrt python or ruby or lua, etc., is its ubiquity.
Developer efficiency matters much more to me in scenarios where both bash and Python are sensible choices.
Some tasks lend themselves well to bash, and others to Python. It also isn't unusual for me to start something as a bash script and change it to Python as it evolves over several weeks.
A big advantage of Python is in corner cases around filename handling, and it has glob, shutil, subprocess, and others for common scripting needs.
When you are writing scripts, performance does not matter (in most cases).
If you care about performance, 'Python vs Bash' is a false question.
Python:
+ easier to write
+ easier to maintain
+ easier code reuse (try to find a universal, error-proof way to include files with common code in sh, I dare you)
+ you can do OOP with it too!
+ easier argument parsing. Well, not easier, exactly; it will still be too wordy for my taste, but Python has the argparse facility built in (see the argparse sketch below).
- ugly, ugly subprocess. Try to chain commands and not cry a river over how ugly your code becomes, especially if you care about exit codes (see the pipeline sketch after the Bash list).
Bash:
+ ubiquity, as was said earlier, indeed.
+ simple command chaining: that's how you glue different commands together in a simple way. Bash (not sh) also has some improvements, like pipefail, so chaining stays short and expressive.
+ does not require 3rd-party programs to be installed; can be executed right away.
- god, it's full of gotchas. IFS, CDPATH... thousands of them.
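To illustrate the subprocess complaint, a hedged sketch of the Bash pipeline dmesg | grep sda with pipefail semantics emulated by hand, adapted from the pattern in the subprocess docs:

import subprocess

# Bash: set -o pipefail; dmesg | grep sda
p1 = subprocess.Popen(["dmesg"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "sda"], stdin=p1.stdout,
                      stdout=subprocess.PIPE, text=True)
p1.stdout.close()   # let p1 receive SIGPIPE if p2 exits early
out, _ = p2.communicate()
p1.wait()
# Emulating pipefail by hand: check both exit codes.
if p1.returncode != 0 or p2.returncode != 0:
    raise RuntimeError(f"pipeline failed: {p1.returncode}, {p2.returncode}")
print(out, end="")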
If one is writing a script bigger than 100 LOC: choose Python.
If one needs path manipulation in the script: choose Python (3).
If one needs something like an alias, but slightly more complicated: choose Bash/sh.
Anyway, one should try both sides to get an idea of what they are capable of.
Maybe this answer could be extended with packaging and IDE-support points, but I'm not familiar with those sides.
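And the argparse point from the Python list above, as a minimal sketch (the script's options are invented for illustration):

import argparse

# Minimal sketch of the built-in argparse facility mentioned above.
parser = argparse.ArgumentParser(description="Copy a file N times.")
parser.add_argument("source", help="file to copy")
parser.add_argument("-n", "--count", type=int, default=1, help="number of copies")
parser.add_argument("-v", "--verbose", action="store_true")
args = parser.parse_args()

if args.verbose:
    print(f"copying {args.source} {args.count} time(s)")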
As always, you have to choose between a turd sandwich and a giant douche.
And remember, just a few years ago Perl was the new hope. Where is it now?
Performance-wise, Bash outperforms Python in process startup time.
Here are some measurements from my core i7 laptop running Linux Mint:
Starting process                 Startup time
empty /bin/sh script                   1.7 ms
empty /bin/bash script                 2.8 ms
empty python script                   11.1 ms
python script with a few libs*         110 ms

*Loaded Python libs: os, os.path, json, time, requests, threading, subprocess
This shows a huge difference; however, Bash execution time degrades quickly if it has to do anything sensible, since it usually must call external processes.
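A rough sketch of how such startup numbers can be reproduced (paths and run count are assumptions; the ':' builtin and pass are no-ops):

import subprocess
import time

def startup_ms(argv, runs=50):
    """Average wall-clock time to spawn argv and wait for it to exit."""
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run(argv)
    return (time.perf_counter() - start) * 1000 / runs

# ':' is a shell builtin no-op, so these measure pure interpreter startup.
print("sh:    ", startup_ms(["/bin/sh", "-c", ":"]), "ms")
print("bash:  ", startup_ms(["/bin/bash", "-c", ":"]), "ms")
print("python:", startup_ms(["python3", "-c", "pass"]), "ms")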
If you care about performance, use Bash only for:
really simple and frequently called scripts
scripts that mainly call other processes
when you need minimal friction between manual administrative actions and scripting: quickly check a few commands, then place them in a file.sh
Bash is primarily a batch / shell scripting language with far less support for various data types, and it has all sorts of quirks around control structures, not to mention compatibility issues.
Which is faster? Neither, because you are not comparing apples to apples here. If you have to sort an ASCII text file using tools like zcat, sort, uniq, and sed, you will smoke Python performance-wise.
However, if you need a proper programming environment that supports floating point and various control flow, then Python wins hands down. If you wrote, say, a recursive algorithm in both Bash and Python, the Python version would win by an order of magnitude or more.
I'm posting this late answer primarily because Google likes this question.
I believe the issue and context really should be about the workflow, not the tools. The overall philosophy is always "Use the right tool for the job." But before that comes a principle many forget once they get lost in the tools: "Get the job done."
When I have a problem that isn't completely defined, I almost always start with Bash. I have solved some gnarly problems in large Bash scripts that are both readable and maintainable.
But when does the problem start to exceed what Bash should be asked to do? I have some checks I use to give me warnings:
Am I wishing Bash had 2D (or higher) arrays? If yes, it's time to realize that Bash is not a great data processing language.
Am I doing more work preparing data for other utilities than I am actually running those utilities? If yes, time again to realize Bash is not a great data processing language.
Is my script simply getting too large to manage? If yes, it is important to realize that while Bash can import script libraries, it lacks a package system like other languages have. It's really a "roll your own" language compared to most others. Then again, it has an enormous amount of functionality built in (some say too much...).
The list goes on. Bottom line: when you are working harder to keep your scripts running than you are adding features, it's time to leave Bash.
Let's assume you've decided to move your work to Python. If your Bash scripts are clean, the initial conversion is quite straightforward. There are even several converters / translators that will do the first pass for you.
The next question is: What do you give up moving to Python?
All calls to external utilities must be wrapped in something from the subprocess module (or equivalent). There are multiple ways to do this, and until 3.7 it took some effort to get it right (3.7 improved subprocess.run() to handle all common cases on its own).
Surprisingly, Python has no standard platform-independent non-blocking utility (with timeout) for polling the keyboard (stdin). The Bash read command is an awesome tool for simple user interaction. My most common use is to show a spinner until the user presses a key, while also running a polling function (with each spinner step) to make sure things are still running well. This is a harder problem than it would appear at first, so I often simply make a call to Bash: Expensive, but it does precisely what I need.
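For the Unix-only case, a hedged sketch of what such polling can look like with select. This is the POSIX-specific workaround, not the portable facility the answer wishes existed, and it only sees input after the user presses Enter:

import itertools
import select
import sys

def key_pressed(timeout=0.2):
    """Return a line from stdin if one arrives within timeout, else None.
    POSIX-only: select() works on stdin file descriptors on Unix."""
    ready, _, _ = select.select([sys.stdin], [], [], timeout)
    return sys.stdin.readline() if ready else None

# Spinner: animate while polling; stops once the user presses Enter
# (stdin is line-buffered here, so a bare keypress is not enough).
for ch in itertools.cycle("|/-\\"):
    print(f"\rworking {ch}", end="", flush=True)
    if key_pressed(0.2) is not None:
        break
print()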
If you are developing on an embedded or memory-constrained system, Python's memory footprint can be many times larger than Bash's (depending on the task at hand). Plus, there is almost always an instance of Bash already in memory, which may not be the case for Python.
For scripts that run once and exit quickly, Python's startup time can be much longer than Bash's. But if the script contains significant calculations, Python quickly pulls ahead.
Python has the most comprehensive package system on the planet. When Bash gets even slightly complex, Python probably has a package that makes whole chunks of Bash become a single call. However, finding the right package(s) to use is the biggest and most daunting part of becoming a Pythonista. Fortunately, Google and StackExchange are your friends.
If you are looking to cobble together a quick utility with minimal effort, Bash is good. For a wrapper around an application, Bash is invaluable.
Anything that may have you coming back over and over to add improvements is probably (though not always) better suited to a language like Python, as Bash code comprising over 1000 lines gets very painful to maintain. Bash code is also irritating to debug when it gets long.
Part of the problem with these kinds of questions is, from my experience, that shell scripts are usually all custom tasks. There have been very few shell scripting tasks I have come across for which a solution is already freely available.
There are two scenarios where I believe Bash performance is at least equal:
Scripting of command line utilities
Scripts which take only a short time to execute; where starting the Python interpreter takes more time than the operation itself
That said, I usually don't really concern myself with performance of the scripting language itself. If performance is a real issue you don't script but program (possibly in Python).
I don't know if this is accurate, but I have found that Python/Ruby works much better for scripts that have a lot of mathematical computations. Otherwise you have to use dc or some other "arbitrary precision calculator", which becomes a very big pain. With Python you have much more control over floats vs ints, and it is much easier to perform a lot of computations.
In particular, I would never use a Bash script to handle binary information or bytes. Instead I would use something like Python (maybe), or C++, or even Node.js.
Performance-wise both can do roughly the same, so the question becomes: which saves more development time?
Bash relies on calling other commands and piping them together to create new ones. This has the advantage that you can quickly create new programs just with code borrowed from other people, no matter what programming language they used.
This also has the side effect of resisting change in sub-commands pretty well, as the interface between them is just plain text.
Additionally, Bash is very permissive about how you can write in it. This means it will work for a wider variety of contexts, but it also relies on the programmer intending to code in a clean, safe manner. Otherwise Bash won't stop you from building a mess.
Python is more structured about style, so a messy programmer won't be as messy. It will also work on operating systems other than Linux, making it instantly more appropriate if you need that kind of portability.
But it isn't as simple for calling other commands. So if your operating system is Unix, you will most likely find that developing in Bash is the fastest way to develop.
When to use Bash:
It's a non-graphical program, or the engine of a graphical one.
It's only for Unix.
When to use Python:
It's a graphical program.
It must work on Windows.

C++ to python communication. Multiple io streams?

A Python program opens a new process running the C++ program and reads the process's stdout.
No problem so far.
But is it possible to have multiple streams like this for communication? I can get two if I misuse stderr too, but not more. An easy way to hack this would be to use temporary files. Is there something more elegant that does not need a detour through the filesystem?
PS: *nix specific solutions are welcome too
On Unix systems, the usual way to open a subprocess is with fork(), which leaves any open file descriptors (small integers representing open files or sockets) available in both the child and the parent, followed by exec(), which also allows the new executable to use the file descriptors that were open in the old process. This functionality is preserved in the subprocess.Popen() call (adjustable with the close_fds argument). Thus, what you probably want to do is use os.pipe() to create pairs of file descriptors to communicate on, then use Popen() to launch the other process, passing the fd numbers returned by pipe() as arguments so the child knows which fds it should use.
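A minimal sketch of that approach, assuming a hypothetical ./cpp_program that accepts its extra fd numbers on the command line (pass_fds needs Python 3.2+):

import os
import subprocess

# One extra channel per direction, beyond stdin/stdout/stderr.
child_read, parent_write = os.pipe()   # parent -> child
parent_read, child_write = os.pipe()   # child -> parent

proc = subprocess.Popen(
    ["./cpp_program", str(child_read), str(child_write)],  # hypothetical child
    pass_fds=(child_read, child_write),  # keep these fds open in the child
)
os.close(child_read)    # parent no longer needs the child's ends
os.close(child_write)

os.write(parent_write, b"hello over the extra channel\n")
reply = os.read(parent_read, 4096)     # the C++ side would read(fd)/write(fd)
proc.wait()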
Sounds like what you want is to use sockets for communication. Both languages let you open raw sockets, but you might want to check out the zeromq project as well, which has some additional advantages for message passing. Check out their hello world in C++ and Python.
Assuming a Windows machine, you could try using the clipboard for exchanging information between the Python process and C++.
Assign some unique process ID followed by your information and write it to the clipboard on the Python side; then just parse the string on the C++ side.
It's akin to using temporary files, but all done in memory. The drawback is that you cannot use the clipboard for any other application.
Hope it helps.
With traditional, synchronous programming and the standard Python library, what you're asking is difficult to accomplish. If, instead, you consider using an asynchronous programming model and the Twisted library, it's a piece of cake. The Using Processes HOWTO describes how to easily communicate with as many processes as you like. Admittedly, there's a bit of a learning curve to Twisted but it's well worth the effort.
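A rough sketch of that Twisted approach, with ./cpp_program again a hypothetical child; childFDs maps an extra read pipe onto fd 3 alongside the standard three:

from twisted.internet import protocol, reactor

class MultiStream(protocol.ProcessProtocol):
    def childDataReceived(self, childFD, data):
        # Called for every fd mapped as 'r' below, not just stdout.
        print(f"fd {childFD}: {data!r}")

    def processEnded(self, reason):
        reactor.stop()

# fd 0 writable (child's stdin), fds 1, 2, and 3 readable by the parent.
reactor.spawnProcess(
    MultiStream(), "./cpp_program", ["./cpp_program"],  # hypothetical child
    childFDs={0: "w", 1: "r", 2: "r", 3: "r"},
)
reactor.run()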

Why are scripting languages (e.g. Perl, Python, and Ruby) not suitable as shell languages? [closed]

What are the differences between shell languages like Bash (bash), Z shell (zsh), Fish (fish) and the scripting languages above that makes them more suitable for the shell?
When using the command line, the shell languages seem much easier. It feels much smoother to me to use bash, for example, than to use the shell profile in IPython, despite reports to the contrary. I think most will agree with me that a large portion of medium to large scale programming is easier in Python than in Bash. I use Python as the language I am most familiar with; the same goes for Perl and Ruby.
I have tried to articulate the reason, but I am unable to, aside from suspecting that the different treatment of strings in the two has something to do with it.
The reason for this question is that I am hoping to develop a language usable in both. If you know of such a language, please post it as well.
As S.Lott explains, the question needs some clarification. I am asking about the features of the shell language versus that of scripting languages. So the comparison is not about the characteristics of various interactive (REPL) environments such as history and command line substitution. An alternative expression for the question would be:
Can a programming language that is suitable for design of complex systems be at the same time able to express useful one-liners that can access the file system or control jobs? Can a programming language usefully scale up as well as scale down?
There are a couple of differences that I can think of; just thoughtstreaming here, in no particular order:
Python & Co. are designed to be good at scripting. Bash & Co. are designed to be only good at scripting, with absolutely no compromise. IOW: Python is designed to be good both at scripting and non-scripting, Bash cares only about scripting.
Bash & Co. are untyped, Python & Co. are strongly typed, which means that the number 123, the string 123 and the file 123 are quite different. They are, however, not statically typed, which means they need to have different literals for those, in order to keep them apart.
Example:
                | Ruby             | Bash
----------------+------------------+-----
number          | 123              | 123
string          | '123'            | 123
regexp          | /123/            | 123
file            | File.open('123') | 123
file descriptor | IO.open('123')   | 123
URI             | URI.parse('123') | 123
command         | `123`            | 123
Python & Co. are designed to scale up to 10000, 100000, maybe even 1000000 line programs, Bash & Co. are designed to scale down to 10 character programs.
In Bash & Co., files, directories, file descriptors, and processes are all first-class objects; in Python, only Python objects are first-class: if you want to manipulate files, directories, etc., you have to wrap them in a Python object first.
Shell programming is basically dataflow programming. Nobody realizes that, not even the people who write shells, but it turns out that shells are quite good at that, and general-purpose languages not so much. In the general-purpose programming world, dataflow seems to be mostly viewed as a concurrency model, not so much as a programming paradigm.
I have the feeling that trying to address these points by bolting features or DSLs onto a general-purpose programming language doesn't work. At least, I have yet to see a convincing implementation of it. There is RuSH (Ruby shell), which tries to implement a shell in Ruby, there is rush, which is an internal DSL for shell programming in Ruby, there is Hotwire, which is a Python shell, but IMO none of those come even close to competing with Bash, Zsh, fish and friends.
Actually, IMHO, the best current shell is Microsoft PowerShell, which is very surprising considering that for several decades now, Microsoft has continually had the worst shells evar. I mean, COMMAND.COM? Really? (Unfortunately, they still have a crappy terminal. It's still the "command prompt" that has been around since, what? Windows 3.0?)
PowerShell was basically created by ignoring everything Microsoft has ever done (COMMAND.COM, CMD.EXE, VBScript, JScript) and instead starting from the Unix shell, then removing all backwards-compatibility cruft (like backticks for command substitution) and massaging it a bit to make it more Windows-friendly (like using the now unused backtick as an escape character instead of the backslash which is the path component separator character in Windows). After that, is when the magic happens.
They address problem 1 and 3 from above, by basically making the opposite choice compared to Python. Python cares about large programs first, scripting second. Bash cares only about scripting. PowerShell cares about scripting first, large programs second. A defining moment for me was watching a video of an interview with Jeffrey Snover (PowerShell's lead designer), when the interviewer asked him how big of a program one could write with PowerShell and Snover answered without missing a beat: "80 characters." At that moment I realized that this is finally a guy at Microsoft who "gets" shell programming (probably related to the fact that PowerShell was neither developed by Microsoft's programming language group (i.e. lambda-calculus math nerds) nor the OS group (kernel nerds) but rather the server group (i.e. sysadmins who actually use shells)), and that I should probably take a serious look at PowerShell.
Number 2 is solved by having arguments be statically typed. So, you can write just 123 and PowerShell knows whether it is a string or a number or a file, because the cmdlet (which is what shell commands are called in PowerShell) declares the types of its arguments to the shell. This has pretty deep ramifications: unlike Unix, where each command is responsible for parsing its own arguments (the shell basically passes the arguments as an array of strings), argument parsing in PowerShell is done by the shell. The cmdlets specify all their options and flags and arguments, as well as their types and names and documentation(!) to the shell, which then can perform argument parsing, tab completion, IntelliSense, inline documentation popups etc. in one centralized place. (This is not revolutionary, and the PowerShell designers acknowledge shells like the DIGITAL Command Language (DCL) and the IBM OS/400 Command Language (CL) as prior art. For anyone who has ever used an AS/400, this should sound familiar. In OS/400, you can write a shell command and if you don't know the syntax of certain arguments, you can simply leave them out and hit F4, which will bring a menu (similar to an HTML form) with labelled fields, dropdown, help texts etc. This is only possible because the OS knows about all the possible arguments and their types.) In the Unix shell, this information is often duplicated three times: in the argument parsing code in the command itself, in the bash-completion script for tab-completion and in the manpage.
Number 4 is solved by the fact that PowerShell operates on strongly typed objects, which includes stuff like files, processes, folders and so on.
Number 5 is particularly interesting, because PowerShell is the only shell I know of, where the people who wrote it were actually aware of the fact that shells are essentially dataflow engines and deliberately implemented it as a dataflow engine.
Another nice thing about PowerShell is the naming conventions: all cmdlets are named Action-Object and moreover, there are also standardized names for specific actions and specific objects. (Again, this should sound familiar to OS/400 users.) For example, everything which is related to receiving some information is called Get-Foo. And everything operating on (sub-)objects is called Bar-ChildItem. So, the equivalent to ls is Get-ChildItem (although PowerShell also provides builtin aliases ls and dir; in fact, whenever it makes sense, they provide both Unix and CMD.EXE aliases as well as abbreviations (gci in this case)).
But the killer feature IMO is the strongly typed object pipelines. While PowerShell is derived from the Unix shell, there is one very important distinction: in Unix, all communication (both via pipes and redirections as well as via command arguments) is done with untyped, unstructured strings. In PowerShell, it's all strongly typed, structured objects. This is so incredibly powerful that I seriously wonder why no one else has thought of it. (Well, they have, but they never became popular.) In my shell scripts, I estimate that up to one third of the commands are only there to act as adapters between two other commands that don't agree on a common textual format. Many of those adapters go away in PowerShell, because the cmdlets exchange structured objects instead of unstructured text. And if you look inside the commands, then they pretty much consist of three stages: parse the textual input into an internal object representation, manipulate the objects, and convert them back into text. Again, the first and third stages basically go away, because the data already comes in as objects.
However, the designers have taken great care to preserve the dynamicity and flexibility of shell scripting through what they call an Adaptive Type System.
Anyway, I don't want to turn this into a PowerShell commercial. There are plenty of things that are not so great about PowerShell, although most of those have to do either with Windows or with the specific implementation, and not so much with the concepts. (E.g. the fact that it is implemented in .NET means that the very first time you start up the shell can take up to several seconds if the .NET framework is not already in the filesystem cache due to some other application that needs it. Considering that you often use the shell for well under a second, that is completely unacceptable.)
The most important point I want to make is that if you want to look at existing work in scripting languages and shells, you shouldn't stop at Unix and the Ruby/Python/Perl/PHP family. For example, Tcl was already mentioned. Rexx would be another scripting language. Emacs Lisp would be yet another. And in the shell realm there are some of the already mentioned mainframe/midrange shells such as the OS/400 command line and DCL. Also, Plan9's rc.
It's cultural. The Bourne shell is almost 25 years old; it was one of the first scripting languages, and it was the first good solution to the central need of Unix admins. (I.e., a 'glue' to tie all the other utilities together and to do typical Unix tasks without having to compile a damn C program every time.)
By modern standards, its syntax is atrocious and its weird rules and punctuation-as-statement style (useful in the 1970s when every byte counted) make it hard for non-admins to penetrate it. But it did the job. The flaws and shortcomings were addressed by evolutionary improvements in its descendants (ksh, bash, zsh) without having to reconceive the ideas behind it. Admins stuck to the core syntax because, weird as it was, nothing else handled the simple stuff better without getting in the way.
For complex stuff, Perl came along and morphed into a sort of half-admin, half-application language. But the more complex something gets, the more it's seen as an application rather than admin work, so the business people tend to look for "programmers" rather than "admins" to do it, despite the fact that the right kind of geek tends to be both. So that's where the focus went, and the evolutionary improvements to the application capabilities of Perl resulted in...well, Python and Ruby. (That's an oversimplification, but Perl was one of several inspirations for both languages.)
Result? Specialization. Admins tend to think modern interpreted languages are too heavyweight for the things they're paid to do every day. And overall, they're right. They don't need objects. They don't care about data structures. They need commands. They need glue. Nothing else tries to do commands better than the Bourne shell concept (except maybe Tcl, which was already mentioned here); and Bourne is good enough.
Programmers -- who nowadays are having to learn about devops more and more -- look at the limitations of the Bourne shell and wonder how the hell anyone could put up with it. But the tools they know, while they certainly lean towards the Unixish style of I/O and file operations, aren't better for the purpose. I've written things like backup scripts and file renaming one-offs in Ruby, because I know it better than I know bash, but any dedicated admin could do the same thing in bash -- probably in fewer lines and with less overhead, but either way, it'd work just as well.
It's a common thing to ask "Why does everyone use Y when Z is better?" -- but evolution in technology, like evolution in everything else, tends to stop at good enough. The 'better' solution doesn't win unless the difference is viewed as a deal-breaking frustration. Bourne-type scripting might be frustrating to you, but for the people who use it all the time and for the jobs it was meant for, it's always done the job.
A shell language has to be easy to use. You want to type one-time throw away commands, not small programs. I.e., you want to type
ls -laR /usr
not
shell.ls("/usr", long=True, all=True, recursive=True)
This (also) means shell languages don't really care if an argument is an option, a string, a number or something else.
Also, programming constructs in shells are an add-on, and not even always built in. I.e., consider the combination of if and [ in Bash or Bourne shell (sh), seq for generating sequences, and so on.
Finally, shells have specific needs that you need less, or differently in programming. I.e., pipes, file redirection, process/job control, and so on.
If you know of such a language, please post it as well.
Tcl is one such language. Mainly because it is designed to primarily be a shell interpreter for CAD programs. Here's one hardcore Python programmer's* experience of realising why Tcl was designed the way it was: I can't believe I'm praising Tcl
For me: I've written, and have been using and improving, a Tcl shell (written in Tcl, of course) as my main Linux login shell on my homebrewed router: Pure Tcl readline
Some of the reasons I like Tcl in general has everything to do with the similarity of its syntax to traditional shells:
At its most basic, Tcl syntax is command argument argument.... There's nothing else. This is the same as Bash, C shell or even DOS shell.
A bareword is considered a string. This is again similar to traditional shells allowing you to write: open myfile.txt w+ instead of open "myfile.txt" "w+".
Because of the foundations of 1 and 2, Tcl ends up with very little extraneous syntax. You write code with less punctuation: puts Hello instead of printf("Hello");. When writing programs you don't feel the hurt so much, because you spend a lot of time thinking about what to write. When you use a shell to copy a file, you don't think, you just type; having to type ( and " and , and ) and ; again and again gets annoying very quickly.
*Note: not me; I'm a hardcore Tcl programmer
Who says they aren't? Take a look at Zoidberg. REPLs (Read Eval Print Loops) make crappy shells because every command must be syntactically correct, and running a program goes from being:
foo arg1 arg2 arg3
to
system "foo", "arg1", "arg2", "arg3"
And don't even get me started on trying to do redirection.
So, you need a custom shell (rather than a REPL) that understands commands and redirection and the language you want to use to bind commands together. I think zoid (the Zoidberg shell) does a pretty good job of it.
These answers inspired me to take over maintenance of the Perl-based shell Zoidberg. After some fixes, it is usable again!
Check out the user's guide or install Bundle::Zoidberg using your favorite CPAN client.
No.
No, scripting languages are probably not suitable for shells.
The problem is the dichotomy between macro languages and, well, everything else.
The shell is in a category with other legacy macro languages such as nroff and m4. In these processors, everything is a string and the processor defines a mapping from input strings to output strings.
Certain boundaries are crossed in both directions in all languages, but it's usually quite clear whether a system's category is macro or, hmm, I'm not aware of an official term ... I will say "a real language".
So sure, you could type in all your commands in a language like Ruby, and it might even be a second-best choice to a real shell, but it will never be a macro language. There is too much syntax to respect. It takes too many quotes.
But the macro language has its own issues when you start programming in it, because too many compromises had to be made to get rid of all that syntax. Strings are typed in with no quotes. Various amounts of magic need to be re-introduced to inject the missing syntax. I did a code-golf in nroff once, just to be different. It was pretty strange. The source code to big implementations in macro languages is scary.
Since both are formally programming languages, what you can do in one, you can do in the other. Actually, it is a question of design emphasis: shell languages are designed for interactive use, while scripting languages aren't.
The basic difference in the design is the storage of data between commands and the scope of variables. In Bash etc. you have to jump through hoops to store a value (for example, commands like set a='something'), while in languages like Python you simply use an assignment statement (a = 'something'). When using values, in a shell language you have to tell the language that you want the value of the variable, while in scripting languages you have to tell the language when you want the immediate value of the string. This has effects when used interactively.
In a scripting language where ls was defined as a command:

a = some_value
ls a*b

What does a*b mean here? Is it some_value * (whatever b is), or the literal characters 'a', anything, 'b'? In a scripting language the default is what is stored in memory for a, so

ls 'a*b'

now means what the Unix ls a*b means.

In a Bash-like language:

set a=some_value
ls a*b      means what the Unix ls a*b means.
ls $a*b     uses an explicit recall of the value of a.

Scripting languages make it easy to store and recall values, and hard to have a transient scope on a value. Shell languages make it possible to store and recall values, but have a trivially transient scope per command.
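In Python terms, a hedged sketch of the same contrast (the variable and pattern are invented for illustration): the stored value is the default, and filename expansion must be requested explicitly.

import glob
import subprocess

a = "some_value"

# Shell `ls $a*b`: interpolate the variable, then let the shell glob-expand.
matches = glob.glob(a + "*b")           # explicit expansion in Python
subprocess.run(["ls", "-l", *matches])

# Shell `ls 'a*b'`: a literal name, no expansion. In Python, simply:
subprocess.run(["ls", "-l", "a*b"])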
I think it's a question of parsing. Shell languages assume by default that a plain xxx at the prompt means a command to run, while Python and Ruby need you to do system("command") or the like.
It's not that they're unsuitable, just that nobody has really done it yet; at least I think so. Rush is an example attempt in Ruby, and Python has IPython or something like that.
You beg the question. Not everyone agrees that shell languages are superior. For one, _why doesn't:
Not long ago a friend asked me how to recursively search his PHP scripts for a string. He had a lot of big binary files and templates in those directories that could have really bogged down a plain grep. I couldn't think of a way to use grep to make this happen, so I figured using find and grep together would be my best bet.
find . -name "*.php" -exec grep 'search_string' {} \; -print
Here's the above file search reworked in Ruby:
Dir['**/*.php'].each do |path|
  File.open(path) do |f|
    f.grep(/search_string/) do |line|
      puts "#{path}: #{line}"   # print the matching file and line together
    end
  end
end
Your first reaction may be, "Well, that's quite a bit wordier than the original." And I just have to shrug and let it be. "It's a lot easier to extend," I say. And it works across platforms.
Scalability and extensibility? Common Lisp (you can even run CLISP, and possibly other implementations, as a login shell in Unix environments).
For the Windows users, I haven't yet felt the need for PowerShell, because I still use 4NT (now Take Command Console) from JP Software. It is a very good shell with lots of programming abilities. So it combines the best of both worlds.
When you take a look at, for example, IRB (the Ruby interpreter shell), it should be quite possible to extend it with more one-liners for daily scripting, mass file management, and on-the-fly tasks.

