Learning From The Past
The history of software development is peppered with fads and fashions: things which seem like inevitable progress at the time but are later discarded as flawed dead-ends. Even temporary infatuations leave their marks: legacy software, books, web content, “best practices” and so on. Among all this debris it can often be hard to identify things which have stood the test of time, even when modern ideas go over the same ground.
As an example, let’s consider a current hot topic close to our hearts here at One Beyond: Microservices.
Fred George’s microservices (read the slides) boil down to two essential rules and two guidelines:
- A system must support multiple live versions of every component
- Only one thing must change at a time (add or remove one component, reconfigure one service, etc.)
- The smaller and more independent the components are, the better
- The smaller changes are, and the sooner they are deployed, the better
In traditional application design this is so unusual as to be heretical. Architects keep designing bigger, tightly-coupled solutions which need huge amounts of ceremony and require heroic work to stop the whole edifice from catching fire if anything goes wrong (cf. the “towering inferno”).
What may come as a surprise, though, is that we all use this kind of microservice system every day: the World Wide Web.
If we view each web page as a “microservice”, the web matches Fred’s concept very closely, particularly the kind of “static” web pages which are stored separately on a server and delivered to web browsers on demand. Such pages present some information to a user and refer to other pages using hyperlinks; sometimes they also gather some information from a user and process it with some in-page JavaScript and/or the assistance of other remote services.
The web has always supported multiple versions of web pages on a server. All it takes is a change to a link URL to include a new page in a “web site”. Deploying a page is as simple as saving a file, and the limitations of what people are willing to read tend to make web pages small and understandable.
So why is it that web “applications” are often so clumsy and fragile to develop, test, deploy and understand? Why have these same web apps become a classic example of the problems Fred is trying to address?
In some ways the answer lies in the history of web development. The very first web servers could do nothing but serve pages from files. There was no JavaScript, no “server-side” software, only links to other pages. For dissemination and browsing of pre-written information this was fine, but people soon wanted more. For web sites with lots of information it became very labour-intensive to mark up everything in HTML by hand, and practically unworkable for any information which changed faster than the time and skills available to write the pages. This became even more of a bottleneck when web pages started gaining style as well as raw information. Changing a logo or header image across many web pages became a fiddly slog, so the people creating these web pages looked for ways to make things easier.
The earliest kind of dynamic pages were built to address these issues, and consisted of two basic technologies: “server-side includes” (SSI) and the “Common Gateway Interface” (CGI). SSI addressed the issue of common headers on multiple pages by supporting what have now become known as “partials” (page fragments re-used in multiple places). CGI was more far-reaching in that it allowed, for the first time, web pages to be generated as they were requested, by running a script. A “CGI script” has a very simple interface: the web server sets some environment variables representing the HTTP headers, passes the request body as the input to the script, and returns the output of the script to the client.
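To make that interface concrete, here is a minimal sketch of a CGI script. It is written in Python purely for readability (the real early scripts were more often shell or Perl), but the environment variable names and the headers-then-blank-line output format come straight from the CGI convention described above:

```python
#!/usr/bin/env python3
# A minimal sketch of the CGI contract: the server passes the request in
# environment variables and on stdin, and whatever the script prints
# becomes the HTTP response.
import os
import sys

method = os.environ.get("REQUEST_METHOD", "GET")      # e.g. "GET" or "POST"
query = os.environ.get("QUERY_STRING", "")            # everything after the "?"
length = int(os.environ.get("CONTENT_LENGTH") or 0)   # size of the request body
body = sys.stdin.read(length) if length else ""       # the request body itself

# The response: headers first, then a blank line, then the page.
print("Content-Type: text/html")
print()
print(f"<html><body><p>Handled a {method} request "
      f"with {len(body)} bytes of body.</p></body></html>")
```

Note that the script holds no state at all: it is started for a single request, reads, prints, and exits.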
CGI was the engine which powered the early days of the dynamic web, and there are still many web sites which rely on this venerable technology. Just like the original static web pages, CGI scripts fit pretty well in the context of Fred’s microservices. A good CGI script does one job (building a single web page) and can be substituted simply by changing a link URL. An application consisting of several static web pages, perhaps with a bit of SSI for common sections, and some CGI scripts to do the hard work has all the characteristics of a microservices deployment.
As an aside, it is important to talk about skills. One of the key emergent characteristics of a microservices architecture is that services can be developed using whatever technologies and skills are available and suitable at the time. As long as a service can handle its job, it is unimportant how it is implemented. In turn, if an implementation decision is later seen as inappropriate, the service can be re-implemented without impact on the greater system. This is hugely important to the practical building and maintenance of such systems. When extra development is required, extra people can be brought onto the team and be useful immediately, with whatever skills they already possess. Development (and re-development) can proceed on many services at once, without requiring complex documentation, training, release processes, or meetings.
Up to this point in the history of the web, this technology independence and freedom still held. But the clouds were looming.
The main problem with CGI as a web technology was held to be one of performance. As web pages became more complex, requiring more information from more diverse sources, typical CGI implementations began to feel the strain. CGI-based web application software might need to make several requests to one or more remote databases for every page, as well as running whatever code was required to build the HTML and text on the page. Database access was a particular problem: the independent, stateless nature of CGI scripts meant that a new database connection had to be opened and closed for every page. This became the major limit on the number of pages which could be served.
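As an illustration of that cost, consider a hypothetical page-listing script in the same vein. Here sqlite3 stands in for a remote database (and the site.db file and articles table are invented for the sketch), but the shape of the problem is the same: the connection lives and dies with the request:

```python
#!/usr/bin/env python3
# Hypothetical CGI script showing the cost described above: each page view
# starts a fresh process, so each view opens and discards its own database
# connection. sqlite3 stands in for a remote database here.
import sqlite3

conn = sqlite3.connect("site.db")   # a brand-new connection on EVERY request
conn.execute("CREATE TABLE IF NOT EXISTS articles (title TEXT)")  # demo setup only

rows = conn.execute("SELECT title FROM articles").fetchall()
print("Content-Type: text/html")
print()                             # a blank line ends the response headers
print("<ul>" + "".join(f"<li>{t}</li>" for (t,) in rows) + "</ul>")

conn.close()                        # torn down again as the process exits
```

Against a genuinely remote database, that connect/close pair costs a network round trip or more per page, on top of the cost of starting the process itself.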
To address this problem, servers were built which, instead of starting a whole new process to run a CGI script for every page access, started a single long-running process which could hold things such as database connections and popular data in memory for much faster access. Early examples include Apache modules and Java servlets. At a stroke this massively improved the performance of the dynamic web, but at a considerable, and often overlooked, cost to development (a trade-off sketched in code after the list below):
- No longer could a script be implemented and re-implemented at will; instead it had to be compatible with the containing server, which in turn usually meant a much-reduced choice of languages, tools, and frameworks.
- No longer could a CGI script be substituted or upgraded whenever required; instead it required changes to the configuration of the server, which in turn almost always implied a server restart.
- No longer was each script responsible for a single job. Code began to be shared, and changes to one component could have unexpected knock-on effects on many others.
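For contrast, here is a hedged sketch of that long-running alternative, in the spirit of an Apache module or Java servlet but using Python's standard http.server so it stays self-contained; the same invented site.db and articles table are assumed. The point is the single connection, opened once at startup and reused for every request:

```python
#!/usr/bin/env python3
# Sketch of the persistent-process model: one long-running server holds the
# database connection open and reuses it across requests, instead of paying
# for a new process and a new connection on every page view.
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

conn = sqlite3.connect("site.db", check_same_thread=False)        # opened ONCE, at startup
conn.execute("CREATE TABLE IF NOT EXISTS articles (title TEXT)")  # demo setup only

class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        rows = conn.execute("SELECT title FROM articles").fetchall()
        page = "<ul>" + "".join(f"<li>{t}</li>" for (t,) in rows) + "</ul>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(page.encode())

# One process, many requests: restart it and every page it serves goes down.
HTTPServer(("", 8000), PageHandler).serve_forever()
```

The performance win is obvious, but so is the coupling: every page handler now lives inside one process, in one language, behind one restart schedule.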
The development of web components was now locked into a larger application, with rules about when the server could be restarted to deploy changes, and with specific skills needed to work on it. This in turn both decreased the ease and speed of development and increased the difficulty of finding and training more developers.
The response of the software industry to this problem has been diverse, but mostly concentrated on attempting to hide or abstract “common” or “difficult” aspects of a system into frameworks and libraries. This has the short-term benefit that suitable applications might need a little less code, but at the long-term cost of even more stuff to add to the job spec, more to go wrong and be misunderstood, and the increasingly worrying possibility of discovering that a chosen framework, library, language, or server is no longer cost-effective for the needs of your particular project. Frameworks in particular can act like magnets, pulling at application code and distorting the natural separation of responsibilities until every change involves the framework.
This situation has become so normal now that it is hardly ever challenged. Job advertisements for web development specify a baroque assortment of skills and experience, sometimes down to specific versions of specific languages or frameworks; and project managers the world over complain about both the quality of staff and the pace of development. Deployment of web applications is routinely late, requires huge amounts of testing, and still frustrates and burns out development teams.
Starting a web development project has now become a matter of placing large bets on unsubstantiated guesses about the suitability and productivity of a collection of third-party software. Even people with experience of particular technologies are rarely in a position to know for certain how things will turn out, as no two business needs are the same.
If we want to improve this situation we need to learn from the past, and discard some commonly held assumptions about software development.
- A framework based on a solution to someone else’s problems, however clever and comprehensive it may seem, is never as useful as you expect.
- To speed up development you need systems split into genuinely independent chunks, even at the cost of some duplication, so that multiple people can work without impeding each other.
- Finding and hiring productive software developers is much easier if the project is less prescriptive about technologies, so leave tool choices and “standards” as late as you can and avoid “lock-in” wherever possible.
- And finally for now, remember that you probably do not need the complex, unwieldy, and expensive solution which might be suggested by “best practices”. Look for simplicity and don’t be afraid of solving your own problems in your own way.
If you keep an eye on these suggestions, you may find that building your own can work out much cheaper than buying in over the lifetime of a typical software system.