This post is part of a short series on the history of analytics, covering:
- Historical notes on analytics — the pre-computer era
- Historical notes on analytic terminology (in which many terms used in this post are defined)
- Historical notes on analytics — departmental adoption (this post)
What set off my “history of analytics” posting kick is, simply put, a pair of observations:
- Most interesting analytic software has been adopted first and foremost at the departmental level.
- People seem to be forgetting that fact.
In particular, I would argue that the following analytic technologies started and prospered largely through departmental adoption:
- Fourth-generation languages (the analytically-focused ones, which in fact started out being consumed on a remote/time-sharing basis)
- Electronic spreadsheets
- 1990s-era business intelligence
- Fancy-visualization business intelligence
- Predictive analytics
- Text analytics
- Rules engines
If we leave out data management/system technologies* — e.g. data warehouse appliances or Hadoop — that’s pretty much everything that succeeded (and a couple that perhaps didn’t). I don’t know what to put on an “Enterprise-wide from the get-go” list except for a couple of duds like executive information systems and balanced scorecards.
*“System software” technologies such as DBMS often do eventually fall under the purview of central IT. But even for them there’s typically a multi-year period during which departments take the initiative in bringing them in.
Of course, this should surprise nobody; information technology is almost always adopted departmentally first, with the exceptions arising mainly in cases where departmental adoption makes no sense. Reasons include:
- The problem being solved is department-specific.
- The expertise and specific techniques to solve the problem are (or seem) subject/department-specific.
- The budget to solve the problem is department-specific.
- The best reasons to centralize technology often involve integration among departments, and new technology is rarely expected to start out being all that integrated.
Departments most likely to be early adopters (relative to others) of analytic technology seem to be:
- Finance/planning, especially in the old days when analytic technology was newer (nowadays finance might be more involved in trying to push reporting/analysis discipline out to a whole company).
- Sales/marketing, because they often have more data than other departments (actual purchase transactions, other customer contacts, and also a lot of external data).
- Investment research, because financial analysis is almost literally their core product. (Ditto trading, for very similar reasons.)
Three examples, for me, serve to bring all this home.
Business PC use famously started with individuals and departments just acquiring PCs, outside of the IT department’s control or even knowledge, way back in the day of the Apple II. Most commonly, the reason to get the PC was to run an electronic spreadsheet, generally VisiCalc.
10-15 years ago, when business intelligence vendors banged the drum for enterprise-wide BI/dashboard adoption, I’d ask them “So, do you have an enterprise-wide dashboard yourselves?” Invariably, they didn’t — but they did have departmental dashboards for sales and/or marketing. It became clear that this was a general pattern in BI adoption.
Multiple generations of technologies that one might think of as having to do with artificial intelligence (e.g. expert systems, predictive analytics,* and text analytics) have wound up with applications concentrated in the same few areas:
- Scientific/engineering research
- National security/law enforcement/anti-fraud
- Underwriting/investing/risk assessment
- Publishing (for the text-oriented technologies)
Those categories comprise 90%+ of the applications I can think of for the golly-gee-whiz technologies of their day. (You could add simulation to the list as well.) And outside of the publishing and criminal-catching sectors, those apps are pretty departmental in nature.
*I think predictive analytics has evolved into a blend of statistics and (other) machine learning, and machine learning can be viewed as a kind of AI.
So why do I think you should care about all this? Two reasons:
- History is cool.
- It has relevance to current issues in analytic technology adoption, which I plan to write more about soon.
I hope you agree.