Let’s face it. Systems architecture can be a dry subject!
But as a technical and business architect who’s been at this a long time, I have witnessed the evolution of enterprise architecture. Good and bad.
As an architect, I’m normally focussed on determining the boundaries between components and striking the balance that gives the client the best functionality and performance for their money. My last few clients, however, have all had significant issues with their wider architecture.
Here are some of the common symptoms I see:
- Uncontrolled increases in integration costs due to ever more complex approaches.
- Poor performance affecting customer satisfaction.
- The proliferation of P1 (priority-one) errors.
To address them, it’s necessary to look for common causes and a way forward.
In this blog, I wanted to share my observations on the history of enterprise architecture, why we have ended up where we are today, and how we can apply some of the learnings to solve the common issues businesses face today.
So, let’s take a step back in time and look at how we got here and whether the conditions that forced changes in approach are starting to occur again.
Are you sitting comfortably?
The Monolith
From the 1970s, IT was largely confined to certain large enterprises with deep pockets and high customer volumes – such as banks, insurance, telco and government. These companies were the key users of mainframe technology, primarily employing bespoke monoliths – single corporate databases and applications – precisely suited to the client’s needs.
These systems were incredibly successful, and some are still around today and heading for 40 years old!
Why were they successful?
Well, because they were well designed, based on a sensible investment of time and money, and because the businesses themselves were stable.
The first applications
By the late 1980s, more companies were beginning to make use of IT – although for many this meant word processing and spreadsheets – and the first Commercial-off-the-shelf (COTS) applications, covering bookkeeping and general ledger, appeared.
However, those companies seeking a competitive edge worked primarily with turnkey suppliers to build their own monoliths. This was a major event, with the emergence of Oracle and other Relational Database Management Systems (RDBMS) and productive, if unappealing, UI tools:
- Allowing companies to enter the world of complex systems at a much lower price.
- Providing much-improved productivity, particularly around the ability to make changes.
Medium enterprises could now compete with, and even beat, the dominant large enterprises. This acted as a catalyst for business change. The platform of stability that the mainframe monoliths had depended on was beginning to crumble under the pace of change.
Many of the large companies made some terrible tactical decisions that compromised the future of the monoliths. For example, as subscribers moved from having a phone line to having phone + dial-up internet + email, it was all too easy to add these as attributes of the phone service rather than move to a many-services model. This ‘agile’ of the 1990s is perhaps a lesson that should be studied closely today.
The emergence of COTS & Enterprise Integration
With more companies becoming dependent on IT, the applications market exploded. ERP/Stock Control, CRM and SFA led the COTS charge, with most applications addressing functionality common to many businesses and industries.
A problem emerged. This was no longer a turnkey solution for a company but a collection of ‘jigsaw pieces’. The turnkey providers – traditionally developers of software – became system integrators, configuring and integrating applications to suit client needs.
The Wild West
The mid-late 1990s was a Wild West period for IT.
New competing COTS applications were entering the market – at least a dozen ‘mainstream’ CRMs, for example – each with differing functionality and the ability to be customised to meet your needs.
What followed was a series of horror stories that went something like this:
“UI was unproductive, and users didn’t like it, so we built a new one over the top – it stopped working when we did an upgrade, and so we had to start again.”
“We’d made lots of customisations, but when we upgraded, we had to remove them and then redo them one by one.”
“We made an investment in one application, but then it was bought by a rival, and they said it was being discontinued and we’d have to move to their other application.”
Enterprise Architecture Approach
These painful lessons led to the formation of a ‘common architectural’ approach, characterised by a few key points:
- Applications should focus on best-in-class business components; don’t customise them to try and bend them to do something else because this customisation will prevent you from upgrading.
- Minimise data duplication; it’s hard to keep in step across multiple components.
- Loosely couple applications; avoid point-to-point integration and use abstracted middleware so that changing a component only affects the middleware and not the other components (a minimal sketch of this idea follows below).
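To make that loose-coupling principle concrete, here is a minimal sketch in TypeScript – all the names (CrmCustomer, InvoicingClient, Middleware and so on) are hypothetical – contrasting a point-to-point call with publishing a canonical message to middleware:

```typescript
// Hypothetical sketch: point-to-point vs. middleware-mediated integration.

interface CrmCustomer {
  id: string;
  name: string;
}

// The invoicing system's own API, as the CRM would see it point-to-point.
interface InvoicingClient {
  createAccount(customer: CrmCustomer): Promise<void>;
}

// An abstracted middleware / message bus.
interface Middleware {
  publish(topic: string, payload: unknown): Promise<void>;
}

// Point-to-point: the CRM calls the invoicing system's API directly,
// so a change to invoicing ripples straight back into this CRM code.
async function onCustomerCreatedPointToPoint(
  customer: CrmCustomer,
  invoicing: InvoicingClient
): Promise<void> {
  await invoicing.createAccount(customer);
}

// Loosely coupled: the CRM publishes a canonical event to the middleware;
// only the middleware knows which systems consume it and in what format.
async function onCustomerCreatedLooselyCoupled(
  customer: CrmCustomer,
  bus: Middleware
): Promise<void> {
  await bus.publish("customer.created", {
    customerId: customer.id,
    name: customer.name,
  });
}
```

The CRM now depends only on the canonical message shape, so swapping the invoicing system should (in theory) touch the middleware alone – which is exactly the assumption the rest of this post questions.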
Most companies have followed these guidelines for the past 20 years; some still do not have fully integrated systems and data is rekeyed between components because the cost of enterprise integration is so high.
So what’s changed since the 90s?
Well, there are two big things:
- Most of the assumptions that gave rise to the new enterprise architecture approach turned out to be wrong.
- Exposing your systems and data to the customer – digitalisation – is a big difference. Grumpy internal users are entirely different from disgruntled customers!
So, let’s look at each of those ‘big things’ in turn.
Assumptions. IT has a habit of making assumptions. Object orientation, for example, was a big thing that was supposed to save you millions in the long term, but it didn’t foresee the advent of COTS. So with integration, it’s no surprise that we got it wrong again. The business components didn’t get swapped out often because:
- They were a significant investment.
- Budgets were tight post-2000, and it’s a hard sell to try and get even more money to replace the bad decision you made before.
- Business change impact is a significant factor – how do we handle re-training all the users and migrating the data?
- How do we know that another COTS application will be any better?
So loose coupling rather than point-to-point integration turned out to be a false economy:
- Perversely, the integration technology is much more likely to change, because it is easier to change as it doesn’t directly affect end users.
- The business components don’t change often, and if one does change – for example, you move to a multi-currency invoicing system – then the other components and interfaces have to change anyway, i.e.
  - The CRM system has to capture the customer’s preferred currency and pass it to the invoicing system via the middleware.
  - We have to change two interfaces rather than one (see the sketch below)!
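To illustrate that ‘two interfaces’ point, here is a rough TypeScript sketch – the message and request shapes are hypothetical – of the two contracts that both have to change when a preferred currency is added:

```typescript
// Hypothetical sketch of the two contracts that both change when the
// business moves to multi-currency invoicing.

// Contract 1: CRM -> middleware canonical message.
interface CustomerCreatedEvent {
  customerId: string;
  name: string;
  preferredCurrency: string; // NEW field, e.g. "GBP", "EUR"
}

// Contract 2: middleware -> invoicing system request.
interface CreateInvoiceAccountRequest {
  accountRef: string;
  currency: string; // NEW field, mapped from the event
}

// The middleware mapping is the second place that has to change.
function toInvoicingRequest(
  event: CustomerCreatedEvent
): CreateInvoiceAccountRequest {
  return {
    accountRef: event.customerId,
    currency: event.preferredCurrency, // new mapping logic
  };
}
```

With a point-to-point integration there would be one contract to change; with middleware in between there are two, plus the mapping logic in the middle.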
If the loose coupling and Enterprise Application Integration (EAI) turned out to be a waste of money, then the second point – digitalisation – is a matter of survival.
Digitalisation means that your 200 internal users (“I’m sorry, the system seems unexpectedly slow today.”) are now joined by 200,000 external users who dare to expect to use your systems twenty-four hours a day and, worse still, expect super-fast response times! The fact that the data is distributed and stitched together by a fundamentally asynchronous integration layer doesn’t look so clever now.
So what’s the answer? Change the integration technology again?
I expect many companies are looking at microservices – smaller transactions, so better performance, and we can containerise to deliver the scale required.
But, hold on…
Will that improve performance?
Well, it will mean lots of small waits rather than one big one.
Do our business components support this?
Some might. You may want to consider building a new customer-facing UI that calls the microservices and copes with the fact that parts of the page will be delivered, if at all, at different times. Oh, and some systems don’t provide microservices, so we’ll have to build an operational data store (ODS) and build the microservices on that. Oh, and everything will have to be 24×7, so we’ll need redundancy too. (A sketch of that page assembly follows below.)
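To show what that page assembly can look like, here is a minimal TypeScript sketch – the endpoints and section names are purely illustrative – of a page stitched together from several microservices, where each section arrives at a different time and may not arrive at all:

```typescript
// Hypothetical sketch: composing one customer page from several microservices.
// Each call is a separate network hop with its own latency and failure mode.

interface PageSection {
  name: string;
  html: string;
}

async function fetchSection(name: string, url: string): Promise<PageSection> {
  const response = await fetch(url); // each section is its own small wait
  if (!response.ok) {
    throw new Error(`${name} service returned ${response.status}`);
  }
  return { name, html: await response.text() };
}

async function assembleCustomerPage(customerId: string): Promise<PageSection[]> {
  // Illustrative endpoints only – not a real API.
  const calls = [
    fetchSection("profile", `/api/profile/${customerId}`),
    fetchSection("orders", `/api/orders/${customerId}`),
    fetchSection("billing", `/api/billing/${customerId}`),
  ];

  // allSettled lets the page render whatever arrived; missing sections
  // need a fallback ("temporarily unavailable") rather than failing the page.
  const results = await Promise.allSettled(calls);
  return results
    .filter(
      (r): r is PromiseFulfilledResult<PageSection> => r.status === "fulfilled"
    )
    .map((r) => r.value);
}
```

Every section that fails or lags needs its own fallback, which is precisely the extra design and testing effort the question above is alluding to.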
This sounds like a lot of time and money. Are we sure it will work?
Well… no.
Is there an alternative approach?
What about the return of the Monolith? The monolith used database integration so data didn’t need to be moved about.
In the twenty-first century, we have a COTS/Monolith hybrid – the Ecosystem.
The Ecosystem is a shared cloud platform providing a single database and UI. Furthermore, the platform is open to 3rd party application vendors who can use and extend the data model. This allows you to assemble the solution that suits you by taking the core platform and the 3rd party apps you need.
For example, everyone needs a basic CRM (customers, contacts etc.) and basic products – widgets – but some need more sophisticated product capabilities such as Configure Price Quote (CPQ) – e.g. you can only buy product X if you buy, or have already bought, product Y. However, there are 3rd party vendors who have already developed highly capable product models and catalogues that you can plug into the main platform, with no integration needed.
However, the Ecosystem concept isn’t just a shared data model; it can also expose 3rd party business services – for example, address validation, credit scoring, logistics etc. – where the 3rd party doesn’t just provide software but a value-added service – again, all pre-integrated.
This is a completely different model, and the scale of take-up is driving the 3rd parties to come on board.
The result is a monolith from the client’s perspective, with a single database. From another perspective, however, it is a best-in-class COTS application: constant evolution, a vast pool of developers, and ongoing competition to drive down prices. The addition of the external business services is something we haven’t seen before.
Further, multi-tenanted cloud platforms have to impose the discipline that all clients are on the same version: upgrades are automatic, and your customisations are preserved. The result exceeds anything that we’ve had from COTS before.
Why Now?
Just like the 1990s when we saw IT move from the large enterprise to include medium enterprises, now we are seeing the shift to small enterprise.
Small enterprises can’t afford developers or dedicated infrastructure, so that’s the driver for multi-tenanted cloud platforms with business applications.
The Ecosystem makes this a richer offering for the client, but also for the 3rd party application vendor, who can now sell cost-effectively at the scale that small business offers instead of relying on a direct sales and marketing model that would be too expensive.
Is there a good example of who does this?
At the moment the dominant player is Salesforce.com.
There are many cloud platforms, but most today are really focussed on infrastructure and technology. Salesforce is focussed on a business application (sales, marketing, CRM etc.) in the classic COTS style, and their Ecosystem – AppExchange, which is used to extend the platform – is a different proposition from the Apple App Store and Google Play, which drive personal rather than corporate applications and do not share data between apps.
The sheer number of apps is impressive, so if you want to add an ERP module you can, and if you want to send your parcels via UPS you can.
The take-up of Salesforce within small enterprises is high, and the consequence is that other, off-platform software vendors and business service providers – even rivals – strive to be part of it just to ensure they don’t miss out on such a large market. This matters because, as a client transitions from small to medium to large enterprise, any further applications they need will have to be compatible with their existing investment and not involve too much ‘IT’.
For example, if your logistics provider isn’t on Salesforce then the risk is that you’ll change logistics provider rather than work out how to integrate with them – so they’ll make themselves available (although you have to push them!).
Is the Ecosystem approach right for me?
The Ecosystem approach represents a radical shift in architecture and in the way that 3rd parties offer their business services.
If you’re struggling with the problems I mentioned at the start, or you’re considering a full digital stack, then it is worthwhile taking a look, even if it doesn’t cover your entire scope.
There are still best practices to follow and pitfalls to avoid but it is certainly the direction of travel – and remember your nimbler, smaller rivals will already be doing this.