
From Mainframe to AI Superhive: a brief history of IT infrastructure evolution.

2024-02-27

IT infrastructure used to be something that only big companies needed. Today it is the backbone of any organization that relies on information technology to operate. It has undergone several major transformations over the decades, as new technologies emerged and challenged the existing paradigms. Let us travel through time together, explore some of the key milestones in the evolution of IT infrastructure, and discover how they shaped the IT industry and the way we think about it. 

The Era of Mainframes: Centralized Computing 

The first generation of IT infrastructure was dominated by mainframe computers: large, expensive, and powerful machines that looked nothing like modern computers. Mainframes could handle massive (for that time) amounts of data and perform complex (again, for that time) calculations. Large organizations, such as banks, governments, and universities, used mainframes as the central part of their IT systems. They were accessed through terminals, simple interfaces that allowed users to interact with the mainframe through a keyboard and a screen. Mainframes were centralized, meaning that all the data and processing power sat in one place, and users had to share the resources of the mainframe. It is a bit ironic that the modern cloud, while much more distributed, is built on a similar idea of outsourcing computing power away from the user. 

Mainframes were the modus operandi from the 1950s to the early 1980s, and their forte was reliability, scalability, and security. They could run multiple applications simultaneously and handle high-volume transactions with ease, relatively speaking of course. They also had built-in (if limited) redundancy and backup systems and could protect the data from unauthorized access (so far the description fits the modern cloud, right?). However, they also had drawbacks that are unimaginable today: astronomical cost, low flexibility, and dependency on a single point of failure. They required specialized hardware, software, and a large staff of technicians and operators to maintain them. They also had limited compatibility and interoperability with other systems and could not adapt to the changing needs and preferences of the users. As the demand for IT services increased and the needs of users became more diverse and dynamic, mainframes started to lose their appeal and different approaches emerged.  

ENIAC, often cited as the first general-purpose electronic computer, was so large that it occupied 167 square meters (about half the area of a tennis court) of floor space and consumed 150 kilowatts of power (no small feat back then, and even now that much power could run a small data center). It could perform about 5,000 calculations per second, which is less than a modern calculator powered by a potato. It also used 18,000 vacuum tubes, which had to be replaced frequently and generated a lot of heat and noise (today we replace a faulty disk or cooler once every few years and consider it an annoying inconvenience). It was built in 1946 and used for military and scientific purposes, such as calculating artillery trajectories and designing hydrogen bombs. The military was the first recipient of high tech; nothing has changed since then. 

Meanwhile, the first commercial mainframe computer, the IBM 701, was launched in 1953 and had 19 customers, including the US Air Force, the US Navy, and General Electric. It could perform about 17,000 calculations per second and had a memory of 2 kilobytes, roughly a couple of paragraphs of the text you are reading. It also used magnetic tapes and punch cards to store and process data and could print 150 lines per minute. That probably looked like lightning speed, considering the only alternative was a typewriter. 

The first message sent over ARPANET, the internet's predecessor, was "lo". It was supposed to be "login", but the system crashed after the first two letters were typed. Yeah, back then engineers had some serious constraints. 

The first email was sent by Ray Tomlinson in 1971, and he used the @ symbol to separate the user name from the hostname. That was more than half a century ago… email is half a century old. Imagine.  

The Era of Servers: Decentralized Computing 

The second generation of IT infrastructure was driven by the emergence of servers: smaller, cheaper, and more efficient machines that could perform specific tasks, such as file, web, or database services. Servers differed from mainframes in that they were decentralized, meaning that each server had its own function, data, and processing power. Servers were connected by local area networks (LANs) or wide area networks (WANs), which allowed the sharing of resources, such as printers, files, and applications, among devices in close or distant proximity. 
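To make the client-server idea a bit more tangible, here is a minimal sketch in modern Python (an illustration only, obviously not period-accurate; the address, port, and messages are placeholders): one process acts as a server that performs a task, and a client asks it to do the work over the network.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9000  # placeholder address/port for this illustration

def serve_once(srv: socket.socket) -> None:
    """A tiny 'service': answers one request with an uppercased echo."""
    conn, _addr = srv.accept()       # wait for a client to connect
    with conn:
        data = conn.recv(1024)       # read the request
        conn.sendall(data.upper())   # send back the response

def ask_server(message: bytes) -> bytes:
    """A client asking the server to do the work over the network."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(message)
        return sock.recv(1024)

if __name__ == "__main__":
    with socket.create_server((HOST, PORT)) as srv:            # server listens first
        threading.Thread(target=serve_once, args=(srv,), daemon=True).start()
        print(ask_server(b"hello, server"))                    # b'HELLO, SERVER'
```

The division of labor is the whole point: the client does not care where or how the work gets done, only that the server answers over the network.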

The Xerox Alto (yes, from the printer company), developed in 1973, is often cited as one of the first servers and also as the first personal computer. It had a graphical user interface, a mouse, a keyboard, and a network connection. It could display text and graphics and run applications such as email, word processing, and games. It was not sold commercially but was used for research and educational purposes, and it influenced the design of later computers. 

In 1990, Tim Berners-Lee used a NeXT Computer as the first web server to start what later became the World Wide Web we know today. It was a black, cube-shaped machine with a 25 MHz processor, a 256 MB hard drive, and a 17-inch monitor. It ran NeXTSTEP, a Unix-based operating system, and had built-in networking and audio capabilities. It also had a magneto-optical drive, which was used to store the original copy of the web. Imagine storing the entire internet on a CD. Speaking of CDs: the first online purchase was made in August 1994, and the item bought was a CD of Sting's album "Ten Summoner's Tales" for $12.48. And if you think spam is a somewhat recent phenomenon, the first email spam was sent by Gary Thuerk, a marketing manager at Digital Equipment Corporation (DEC), on May 3, 1978. He sent an unsolicited message to 393 recipients on ARPANET, the precursor to the internet, promoting a new product. He was reprimanded for it, but he also generated some sales 🙂 

But let’s not get carried away. The introduction of the microprocessor led to the development of smaller, more affordable servers. By the late 1980s and early 1990s, the rise of the Internet and the World Wide Web necessitated more powerful and networked servers, leading to the adoption of client-server architectures. Servers had several advantages over mainframes, such as accessibility, versatility, and productivity. They could run multiple operating systems and applications and support several types of devices and platforms. They also had exponentially lower costs and maintenance requirements and could be easily upgraded and replaced. However, they also had some drawbacks, such as inconsistency, vulnerability, and isolation. They followed different standards and protocols and could not communicate seamlessly with each other. They were more exposed to security threats, such as viruses, hackers, and natural disasters. They also had limited scalability and availability and could not handle the increasing amount of data and the complexity of applications. As the need for collaboration and integration became more evident, servers started to face some limitations, and a new generation of IT infrastructure was needed. Virtualization emerged and increased flexibility significantly, but it was only a stepping stone for what came next. 

The Era of Virtualization and the Cloud: Distributed Computing 

The third generation of IT infrastructure was enabled by the advent of the cloud, a model that provided on-demand access to a pool of shared resources, such as servers, storage, and software, over the internet. The cloud allowed the creation of web applications, which ran on web servers and could be accessed through web browsers. Web applications were distributed, meaning that the data and processing power were spread across multiple locations (sometimes continents), and users could access them from anywhere, at any time. Web applications were supported by cloud computing services, which were categorized into three types: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). 

The concept of cloud computing was inspired by the symbol of a cloud, which was used to represent the internet in network diagrams. The term cloud computing was first used by George Favaloro, a Compaq executive, in 1996, in an internal document that described a vision of internet-based computing. 

The cloud started gaining traction in 2006, when Amazon launched AWS and popularized the term. The 2006 AWS lineup offered S3 and EC2, which formed the core of its services, alongside SQS, which had launched in 2004. The cloud had several distinct advantages, such as scalability, availability, and mobility. It could offer virtually unlimited and elastic resources and handle high-volume and high-velocity data and applications. It could also ensure high reliability and redundancy and recover from failures and disasters faster. However, the early cloud also had some drawbacks, such as latency, complexity, and heterogeneity. It could suffer from network delays and congestion (network throughput and redundancy were nowhere near their current state), which affected performance and quality of service. It also involved multiple layers and components, which increased the difficulty and cost of management and maintenance. Different clouds also had different standards and architectures, which created compatibility and interoperability issues. Cloud services were more expensive for larger deployments with stable loads, so servers and private clouds built from virtualized servers did not give up the spotlight easily and are still popular to this day. However, a new way of thinking started to emerge. 
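To give a feel for what infrastructure as a service means in practice, here is a minimal sketch assuming the boto3 AWS SDK for Python. The bucket name and AMI ID are placeholders, and real use would need credentials, a region, and error handling; the point is only to show that storage and compute become API calls.

```python
import boto3  # AWS SDK for Python (assumed to be installed and configured)

# Object storage (S3): create a bucket to hold data.
s3 = boto3.client("s3")
s3.create_bucket(Bucket="example-bucket-name")  # placeholder; bucket names must be globally unique

# Compute (EC2): launch a small virtual server from a machine image.
ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
```

Capacity that once took weeks of procurement and racking becomes a request to an API, which is exactly why the model caught on so quickly.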

The Era of DevOps: Automated Computing 

The fourth generation of IT infrastructure is the first one that is more about culture than about changes to the software or physical layers. It was driven by the emergence of DevOps around 2010, a culture and a set of practices that aim to bridge the gap between development and operations and to deliver software faster and more reliably. DevOps relies on tools and techniques that automate and optimize the processes of software development, testing, deployment, and monitoring. DevOps enables the creation of microservices: small, independent, and modular services that communicate with each other through APIs. Microservices are agile, meaning that they can be developed, deployed, and updated independently and frequently. 
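As a rough illustration of the microservice idea (a sketch only; the endpoint, port, and payload are made up), here is a tiny self-contained Python service exposing a single JSON API. In a microservice architecture, many such small services, each owning one capability, talk to each other over HTTP in exactly this way.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusService(BaseHTTPRequestHandler):
    """A minimal, independent service exposing one API endpoint."""

    def do_GET(self):
        if self.path == "/status":  # hypothetical endpoint for this sketch
            body = json.dumps({"service": "status", "healthy": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Each microservice runs in its own process (or container) on its own port.
    HTTPServer(("127.0.0.1", 8080), StatusService).serve_forever()
```

Another service, or a deployment pipeline's health check, would simply call GET /status; because nothing else is shared, the service can be rebuilt and redeployed independently and frequently.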

DevOps once again brought significant advantages, such as speed, quality, and feedback. It shortened the software development lifecycle and enabled continuous integration and delivery. It significantly improved software quality and reliability and reduced the number of errors and defects. DevOps required a shift in mindset and behavior and a willingness to embrace change and uncertainty. It also required an elevated level of technical and soft skills, and constant learning and improvement. Then a sleeping giant woke up. 

The Era of AI: Intelligent Computing 

The fifth generation of IT infrastructure is enabled by the advancement of artificial intelligence: the ability of machines to perform tasks that normally require human intelligence, such as reasoning, learning, and decision making. AI did not make DevOps or the physical layer of IT infrastructure obsolete, but it opened an entirely new realm of possibilities, starting with smart applications: applications that can adapt to the context and the behavior of the users and provide personalized and proactive services. Smart applications are cognitive, meaning that they can understand, analyze, and generate natural language, images, and sounds. Smart applications are powered by machine learning, a branch of AI that enables machines to learn from data and improve their performance without explicit programming. 
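To make "learning from data without explicit programming" concrete, here is a minimal sketch assuming the scikit-learn library and its built-in iris toy dataset (chosen purely for illustration): no rules are written by hand; the model derives them from labeled examples and is then evaluated on data it has never seen.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled examples: flower measurements (features) and their species (labels).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# No classification rules are hand-coded; the model infers them from the training data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The learned model then generalizes to examples it has never seen.
print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```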

AI, while not a new concept (it had been toyed with for decades), seriously started to gain traction in 2020. The first GPT (GPT-1) was released in 2018, and interest exploded in June 2020 with the release of GPT-3. It allowed us to automate and optimize tasks and processes and reduce human intervention and errors. It could also detect patterns and anomalies, but its most prominent advantage was that it created the groundwork for innovation and experimentation, allowing easier discovery of new insights and opportunities. So here we are: the initial stages of a new era, with some serious handicaps in areas such as ethics, trust, and explainability, but a new era nevertheless.  

Fun facts: while we perceive AI as something cutting-edge, the first AI program, Logic Theorist, was developed by Allen Newell, Herbert Simon, and Cliff Shaw in 1955. It could prove mathematical theorems using symbolic logic, and it found a more elegant proof for one of the theorems in Principia Mathematica, a landmark work by Bertrand Russell and Alfred North Whitehead. Also, the first smart application, ELIZA, was created by Joseph Weizenbaum in 1966 and was one of the first natural language processing programs. It could simulate a conversation with a psychotherapist, responding to the user's input through pattern matching and substitution. It was designed as a parody of the superficiality and emptiness of human communication, but some users took it seriously and confided their personal problems to it. Yeah, as you can see, deep fakes are not an entirely new challenge. 

Conclusion: The Future of IT Infrastructure 

IT infrastructure is not static, but dynamic. It evolves and adapts to the changing needs and expectations of the users and the environment. IT infrastructure is not only a technical matter, but also a social and cultural one. It reflects and affects the values and visions of the people who create and use it. IT infrastructure is not only a means, but also an end. It shapes and enables the possibilities and opportunities of the future.  

As IT professionals, we have the privilege and the responsibility to be part of this fascinating and challenging journey. We have the opportunity and the duty to learn from the past, to contribute to the present, and to envision the future. We have the power and the obligation to create and use IT infrastructure that is not only functional, but also meaningful. We have the role and the mission to make it not only a tool, but also a story. 

IT infrastructure evolution at a glance:

 

| Time Period | Characteristics | Tech |
| --- | --- | --- |
| Past (1950s-1990s) | Mainframe, client-server, centralized, hierarchical, proprietary, hardware-oriented | IBM 360, VAX, DECnet, SNA, SQL, UNIX, TCP/IP (90s+: Linux, HTML, HTTP, Windows, Java, PHP, Python) |
| Modern (2000s-2020s) | Cloud, web, distributed, networked, open, software-oriented | AWS, Azure, GCP, SSD, Android, 3G |
| Emerging (2020s-2030s) | AI, edge, autonomous, adaptive, smart, data-oriented | TensorFlow, PyTorch, IoT, 5G, blockchain, quantum computing |