JS: Asynchronous programming
Theory: Ordering asynchronous operations
Asynchronous programming helps use computing resources efficiently, but it creates difficulties where things used to be simple. First of all, this concerns control flow.
Imagine we have the task of reading the contents of two files and writing them to a third file (merging files).
The whole task boils down to performing three operations one by one since we can only write a new file when we've read the data from the first two.
There's only one way to arrange this kind of code. Each subsequent operation must run inside the previous one's callback. Then we build the call chain:
In real programs, the number of operations can be much larger. You could end up with dozens of callbacks and dozens of levels of nesting.
This style of asynchronous code is often called Callback Hell because of the large number of nested callbacks, which makes programs hard to analyze. There is even a website, http://callbackhell.com/, dedicated to this problem.
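The site's exact example is not reproduced here; the sketch below only shows the characteristic "pyramid" shape, with hypothetical asynchronous steps (step1, step2, step3) simulated via setTimeout:

```javascript
// Hypothetical async operations: each accepts a callback and "completes" later
const step1 = (cb) => setTimeout(() => cb(1), 10);
const step2 = (x, cb) => setTimeout(() => cb(x + 1), 10);
const step3 = (x, cb) => setTimeout(() => cb(x + 1), 10);

let result;
// The pyramid: each step can start only inside the previous step's callback
step1((a) => {
  step2(a, (b) => {
    step3(b, (c) => {
      result = c; // the final value is available only at the deepest level
    });
  });
});
```

With real operations and error handling at every level, this shape quickly becomes unreadable.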
In some cases, we don't know in advance how many operations we will perform. For example, you may need to read the contents of a directory and see who owns each file (its uid). If the code were synchronous, our solution would look like this:
Any sequential code is pretty straightforward. Each successive line executes after the previous one finishes, and each element is guaranteed to be processed sequentially in map.
But asynchronous code isn't that obvious. As we discussed, reading the directory is a single operation that we perform anyway. But how do we organize the analysis of the files? There can be any number of them. Unfortunately, without ready-made abstractions that simplify this task, we end up with a lot of complicated code. It would be so complicated that it's better never to write it in real life.
This code is for educational purposes only:
Let's observe the general principle.
First, we define a special function, readFileStat, which is called recursively: it passes itself into the callback of stat. With each new call, this function processes one file and shrinks the items array, which contains the unprocessed files. In its second parameter, it accumulates the result, which at the end is passed to the callback cb given as the second argument of the getFileOwners function.
The example above implements an iterative process built on a recursive function. To understand the code better, try copying it to your computer and running it with different arguments, adding some debug output inside it beforehand.

