The emergence of object-oriented programming (OOP) in the second half of the 1980s was a real technological revolution, and I will explain why. Before OOP, structured programming languages dominated, and programmers were quite happy to write programs in high-level structured languages, because at the time that was itself a huge step forward.

The fact is that the computer came into being only after the titanic efforts of such geniuses as Alan Turing, whose theoretical model, the Turing machine, underlies all of today's digital computers. In a stored-program computer, memory holds a sequence of processor instructions, including conditional and unconditional jumps to other instructions. In assembly language such a jump is written as JMP (jump); in high-level languages it became the GOTO (go to) statement.
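As a small sketch of this idea, here is a loop expressed with explicit jumps, in the spirit of JMP, using the C-style goto that high-level languages inherited (an illustration, not real machine code):

```cpp
#include <iostream>

int main() {
    // Sum the numbers 1..5 using explicit jumps, the way a
    // processor-level program would: a label and a conditional jump.
    int i = 1, sum = 0;
loop:                          // jump target, analogous to an address in memory
    sum += i;
    ++i;
    if (i <= 5) goto loop;     // conditional jump, like a conditional JMP
    std::cout << sum << '\n';  // prints 15
    return 0;
}
```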

The first language was assembly, whose instructions map almost one-to-one onto those of the microprocessor. In theory, any program can be written in assembly, but in practice it is hard to express the abstractions of application-level tasks in it.

For application programming, structured programming arrived around the beginning of the 1970s. It took the efforts of other geniuses, such as Niklaus Wirth, creator of the Pascal language, and Edsger Dijkstra, who was the first to write about the need to rid high-level languages of the GOTO statement and proposed doing so with three basic kinds of control structures (sequence, selection, and iteration) together with functions.
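For comparison, here is the same computation as in the goto sketch above, written without GOTO, using only those structured constructs (a minimal C++ sketch):

```cpp
#include <iostream>

// The same sum of 1..5, expressed with the structured building blocks:
// sequence of statements and iteration; no labels, no jumps.
int sumUpTo(int n) {
    int sum = 0;
    for (int i = 1; i <= n; ++i)  // iteration replaces the label + goto
        sum += i;
    return sum;
}

int main() {
    std::cout << sumUpTo(5) << '\n';  // prints 15
    return 0;
}
```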

In practice, this gave rise to programming languages such as Basic, C, Pascal, Algol, Cobol, Fortran, and PL/I. Developing programs top-down in structured programming became a sheer pleasure. It consisted of writing a set of functions containing subfunctions, which could be called by feeding them the necessary input data and receiving the corresponding result, as the sketch below shows.
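A minimal sketch of that top-down style: main states the high-level plan, and each step is a function that can be refined into subfunctions independently (all names here are invented for illustration):

```cpp
#include <iostream>
#include <vector>

// Each step of the plan becomes its own function.
std::vector<int> readInput() { return {3, 1, 2}; }

int process(const std::vector<int>& data) {
    int total = 0;
    for (int x : data) total += x;
    return total;
}

void writeOutput(int result) { std::cout << result << '\n'; }

int main() {
    writeOutput(process(readInput()));  // the plan, read top to bottom
    return 0;
}
```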

Thus, in structured programming languages, algorithms built from functions come first, as it were, and the data for them can be taken from anywhere. A role here was played by an idea of Norbert Wiener, the founder of cybernetics: the function as a black box to which you can feed any data and observe the resulting output.
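That black-box view is visible in C-era interfaces that accept a raw pointer to "any data": the function trusts the caller completely, and the compiler cannot check what is actually passed (a sketch, assuming nothing beyond the standard library):

```cpp
#include <cstdio>
#include <cstring>

// A black-box function in the C tradition: a pointer to "any data"
// and a length, with no knowledge of what the bytes mean.
void dump(const void* data, std::size_t size) {
    const unsigned char* bytes = static_cast<const unsigned char*>(data);
    for (std::size_t i = 0; i < size; ++i)
        std::printf("%02x ", bytes[i]);
    std::printf("\n");
}

int main() {
    int n = 42;
    const char* s = "hi";
    dump(&n, sizeof n);        // an int goes in...
    dump(s, std::strlen(s));   // ...and so does a string: anything fits
    return 0;
}
```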

For small problems, such as sorting data or finding the shortest path, structured programming was a perfect fit. Solutions were found for most complex algorithmic problems. Fundamental works appeared, such as Donald Knuth's multi-volume The Art of Computer Programming, which is still considered a programmer's handbook.

However, the resulting growth in the complexity of programs also increased the chance of introducing errors, because the ability to pass arbitrary data to procedures and functions had side effects. For example, in 1999 NASA lost its Mars Climate Orbiter due to a software error in which the wrong data was passed: one module produced thruster impulse in pound-force seconds while another expected newton-seconds.
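A hypothetical C++ illustration of that failure mode (not the actual flight software): a function expects newton-seconds, but nothing stops a caller from passing pound-force seconds, because both are plain doubles:

```cpp
#include <iostream>

// Expects impulse in newton-seconds; the type cannot enforce the unit.
void applyThrusterImpulse(double impulseNs) {
    std::cout << "applying " << impulseNs << " N*s\n";
}

int main() {
    double impulseLbfS = 100.0;         // computed in pound-force seconds
    applyThrusterImpulse(impulseLbfS);  // compiles fine, silently wrong:
                                        // 100 lbf*s is about 444.8 N*s
    return 0;
}
```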

As a result, a new concept emerged: object-oriented programming, which focuses on what I call the principle of data relevance, and functions become a kind of appendage to the data they must handle. An object is, first of all, a set of data together with its functions. OOP introduces restrictions on functions' access to “foreign” data, which reduces the chance of unintended data changes and dramatically increases the reliability of programs.
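A minimal C++ sketch of that restriction: the data lives inside the object, and only the object's own functions may change it (the class and its names are invented for illustration):

```cpp
#include <iostream>

class Account {
    double balance = 0.0;  // private: no outside function can touch it

public:
    void deposit(double amount) {
        if (amount > 0) balance += amount;  // the object guards its own data
    }
    double getBalance() const { return balance; }
};

int main() {
    Account acc;
    acc.deposit(100.0);
    // acc.balance = -1;  // would not compile: access is restricted
    std::cout << acc.getBalance() << '\n';  // prints 100
    return 0;
}
```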

After the appearance of object-oriented languages such as C++, Object Pascal, Java, and C#, and with the new hardware capabilities of computers, the size of programs and of their data grew many times over, if not by orders of magnitude. This is easy to judge from the size of software distributions, which stopped fitting first on floppy disks and then on CDs. And programming, so to speak, was once again turned from its head onto its feet.