October 3, 2010

Ray Lane and the integration of software and consulting at Oracle

Oracle pretty much doubled revenue every year until it got to around the $1 billion level. Then things got tougher, industry-standard revenue recognition scandals not excepted. At one point there were only three buildings on the Oracle campus, with large portions of them eerily empty. But the ship righted itself, best exemplified by three transitions:

Political battles still raged at Oracle — Mike Fields vs. Craig Conway, Terry Garnett vs. Jerry Baker, and later on Marc Benioff vs. pretty much everybody. But the company was ready to move to the next level. Read more

July 25, 2010

Ingres history

Roland Bouman reminded us on Twitter of an old post I did on another blog about Ingres history, the guts of which were:

Ingres and Oracle were developed around the same time, in rapidly growing startup companies. Ingres generally was the better-featured product, moving a little earlier than Oracle into application development tools, distributed databases, etc. But Oracle was ahead on the most important attributes, such as SQL compatibility — Oracle always used SQL, IBM's suggested standard, while Ingres at first used the arguably superior QUEL from the INGRES research project. And Oracle eventually pulled ahead on the strength of superior/more aggressive sales and marketing.

Then in the 1990s, Ingres just missed the DBMS architecture boat. Oracle, Informix, Microsoft, and IBM all came out with completely new products, based respectively on Oracle + Rdb, Informix + a joint Ingres/Sequent research project, Sybase, and mainframe DB2. Ingres's analogous effort basically foundered, in no small part because the company made the penny-wise, pound-foolish decision to walk away from a joint research project it had undertaken with innovative minicomputer vendor Sequent in the Portland, OR area.

Computer Associates bought Ingres in mid-1994, and immediately brought me in to do a detailed strategic evaluation. (Charles Wang telephoned the day the acquisition closed, in one of the more surprising phone calls I've ever gotten, but I digress … Anyhow, the relevant NDAs, legal and moral alike, have long since expired.) There was nothing terribly wrong with the product, but unfortunately there was nothing terribly right about it either. Aggressive investment — e.g., to get fully competitive in parallelism and object/relational functionality, the two biggest competitive differentiators in those days — would have been no guarantee of renewed market success.

Notwithstanding the economic question marks, CA surprised me with its enthusiasm for taking on these technical challenges. But another problem reared its head — almost all the core developers left the company. (In those days, at least in the hot Northern California job market, the noncompete agreement demanded was utterly unreasonable; if you weren't willing to sign it, you couldn't keep your job post-merger.) And so, like almost all CA acquisitions outside the systems management/security/data center areas, Ingres fell further and further behind the competition.

Some of the same information made it into my post here on Ingres history later the same year, but for some reason not all of it did.

June 5, 2010

David Childs

Talking to Algebraix reminded me that David Childs is still alive and kicking. I only ever encountered Childs once, in the early/mid-1980s, when he was pushing his company Set Theoretic Information Systems. The main customer example for STIS was General Motors, for which he had achieved a remarkable amount of database compression. It was something like 4-5X, if I recall correctly, but for 1983 or whatever that was pretty darned good. The idea was to reduce stored data by partitioning records according to shared values. E.g., you didn't store whether each car was red, blue, or green; instead, you stored the records for all the red cars in one place, the blue cars in another, and so on. There was also some set-theoretic mumbo-jumbo, but I never figured out what it had to do with implementing anything.
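To make that concrete, below is a minimal sketch of the partitioning idea in Python. Everything in it (the car records, the field names) is invented for illustration; it's not a description of how STIS actually worked.

    from collections import defaultdict

    # Hypothetical car records: (car_id, color, model).
    rows = [
        (1, "red",   "sedan"),
        (2, "blue",  "coupe"),
        (3, "red",   "coupe"),
        (4, "green", "sedan"),
    ]

    # Partition on the shared "color" value. Each partition stores only
    # the remaining fields; a record's color is implied by which
    # partition it lives in, so the color is never stored per row.
    partitions = defaultdict(list)
    for car_id, color, model in rows:
        partitions[color].append((car_id, model))

    # Reconstructing full records is a scan of each partition,
    # reattaching the partition key as the "missing" column.
    for color, records in partitions.items():
        for car_id, model in records:
            print(car_id, color, model)

With only a handful of distinct values spread across many rows, never storing the partitioned column at all adds up, which is at least directionally consistent with the 4-5X figure above.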

Comshare — a BI vendor before anybody called it BI — did actually build a DBMS based on Childs’ ideas, as Ron Jeffries reminds us. It was relational. Eventually, if I recall correctly, it was swapped out for Essbase (the original MOLAP product, now owned by Oracle).

What Childs really focuses on, however, seems to be “Extended Set Theory.” (This was brought to my attention by Algebraix, even though Algebraix doesn’t actually use many of Childs’ ideas.) And he’s been doing it for a long time. Way back in 1968, Childs wrote a paper outlining how set theory, relations, and tuples could be applied to data management.

And that's where I did a double-take, because 1968 < 1970, the year of Codd's seminal relational model paper. Sure enough, Footnote #1 in that paper is a reference to Childs' 1968 work. Indeed, Childs' paper is the only predecessor Codd acknowledges as containing significant portions of his idea.

I'm far from convinced that "Extended Set Theory" has much to offer versus the standard relational model. But that debate quite aside — Childs' original achievement doesn't get the credit it deserves.

April 2, 2010

Those who forget history are doomed to believe it is recurring

The top PostgreSQL-related April Fool's joke this year, which seems to have successfully pranked at least a few people, was that Postgres was dropping SQL in favor of an alternative language, QUEL.

Folks, QUEL was the original language for Postgres (strictly speaking, in its POSTQUEL dialect). And Ingres. And, more or less, Teradata.* I'd guess Britton-Lee too, but I don't recall for sure.

*Once upon a distant time, when I was a cocky young stock analyst, I explained to Phil Neches, chief scientist of Teradata, just why it was a really good business idea to drop T-QUEL for SQL. I doubt he was quite convinced that very day, more's the pity.
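For flavor, here is roughly what the two languages looked like side by side, as a toy query reconstructed from textbook QUEL rather than from any particular product's dialect (T-QUEL and POSTQUEL each differed in details):

    QUEL:
        range of e is employee
        retrieve (e.name, e.salary)
        where e.dept = "sales"

    SQL:
        SELECT name, salary
        FROM employee
        WHERE dept = 'sales';

The two are clearly close relatives; the business argument for SQL was never technical superiority so much as compatibility with where IBM, and hence the rest of the market, was going.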

March 30, 2010

No-fooling: A new blog-tagging meme

On April Fool’s Day, it is traditional to spread false stories that you hope will sound true. Last year, however, I decided to do the opposite – I posted some true stories that, at least for a moment, sounded implausible or false. This year I’m going to try to turn the idea into a kind of blog-tagging meme.*

*A blog-tagging meme is, in essence, an internet chain letter without the noxious elements.

Without further ado, the Rules of the No-Fooling Meme are:

Rule 1: Post on your blog 1 or more surprisingly true things about you,* plus their explanations. I’m starting off with 10, but it’s OK to be a lot less wordy than I’m being. 😉 I suggest the following format:

*If you want to relax the “about you” part, that’s fine too.

Rule 2: Link back to this post. That explains what you’re doing. 🙂

Rule 3: Drop a link to your post into the comment thread. That will let people who check here know that you’ve contributed too.

Rule 4: Ping 1 or more other people, encouraging them to join in the meme with posts of their own.

Hopefully, the end result of all this will be that we all know each other just a little bit better! And hopefully we’ll preserve some cool stories as well.

To kick it off, here are my entries. (Please pardon any implied boastfulness; a certain combustibility aside, I’ve lived a pretty fortunate life.)

I was physically evicted by hotel security from a DBMS vendor’s product announcement venue. It was the Plaza Hotel in NYC, at Cullinet’s IDMS/R announcement. Phil Cooper, then Cullinet’s marketing VP, blocked my entrance to the ballroom for the main event, and then called hotel security to have me removed from the premises.

A few years later, the same Phil Cooper stood me up for a breakfast meeting in his own house in Wellesley. When one’s around Phil Cooper, weird things just naturally happen. Read more

March 28, 2010

Software industry hijinks

The approach of April Fool’s Day has me thinking of software industry pranks and other hijinks. Most of what comes to mind is verbal jousting of various sorts that doesn’t really fit the theme. But there was one case in which ongoing business competition got pretty prankish: mainframe-era accounting software leaders MSA vs. McCormack & Dodge. Read more

July 2, 2009

Historical significance of TPC benchmarks

In case you missed it, I've had a couple of recent conversations about the TPC-H benchmark. Some people suggest that, while almost untethered from real-world computing, TPC-H results inspire real-world product improvements. Richard Gostanian even offered a specific example of same — Solaris-specific optimizations for the ParAccel Analytic Database.

That thrilling advance notwithstanding, I'm not aware of much practical significance to any TPC-H-related DBMS product development. But multiple people reminded me this week that TPC-A and TPC-B played a much greater role in spurring product development in the 1990s. And I indeed advised clients in those days that they'd better get their TPC results up to snuff, because they'd be at a severe competitive disadvantage until they did.

It's tough to be precise about examples, because few vendors will admit they developed important features just to boost their benchmark scores. But it wasn't just the TPCs — I recall marketing wars around specific features (row-level locking, nested subqueries) and trade-press benchmarks (PC World?) as much as around actual TPC benchmarks. Indeed, Oracle had an internal policy called WAR, which stood for Win All Reviews; trade-press benchmarks were just a subcase of that.

And then there's Dave DeWitt's take. Dave told me yesterday at SIGMOD that it's unfortunate the Jim Gray-inspired debit/credit TPCs won out over the Wisconsin benchmarks, because that led the industry down the path of focusing on OLTP at the expense of decision support/data warehousing. Whether or not the causality is as strict as Dave was suggesting, it's hard to dispute that mainstream DBMS met or exceeded almost all users' OLTP performance needs by early in this millennium. And it's equally hard to dispute that those systems'* performance on analytic workloads, as of last year, still needed a great deal of improvement.

*IBM's DB2 perhaps excepted. And I say "last year" so as to duck the questions of whether Exadata finally solved Oracle's problems, and whether Madison will do so once Microsoft releases it.

October 2, 2008

A bit of DB2 history, per IBM

I meant to put up a longer post some months back, reproducing some of the 25th anniversary DB2 history IBM provided, courtesy of Jeff Jones and his team. Seems I didn’t get around to it. Maybe later.

Anyhow, I ran across the following concise info, from a January 2003 web page posted by (who else?) Jeff Jones: Read more

September 15, 2008

Database machines and data warehouse appliances – the early days

The idea of specialized hardware for running database management systems has been around for a long time. For example, in the late 1970s, UK national champion computer hardware maker ICL offered a "Content-Addressable Data Store" (or something like that), based on Cullinane's CODASYL database management system IDMS. EDIT: See corrections in the comment thread. (My PaineWebber colleague Steve Smith had actually sold – or at least attempted to sell – that product, and provided useful support when Cullinane complained to my management about my DBMS market conclusions.) But for all practical purposes, the first two significant "database machine" vendors were Britton-Lee and Teradata. And since Britton-Lee eventually sold out to Teradata (after a brief name change to ShareBase), Teradata is entitled to whatever historical glory accrues from having pioneered the database appliance category.

Read more

May 27, 2008

Wikipedia on Cullinet and my comments on same

Wikipedia’s current article on Cullinet is long, detail-laden, and slanted. The difficulties are not of the sort to be fixed with my usual pinpoint Wikipedia edits. So I’ll just reproduce it here, commenting as I go. As for copyright — this particular post is as GPLed as it needs to be to comply with Wikipedia’s copyleft rules. All other rights remain reserved.

The company was originally started by John Cullinane and Larry English in 1968 as Cullinane Corporation. Their idea was to sell pre-packaged software to mainframe users, which was at that time a new concept in an era when enterprises only used internally developed applications or the software that came bundled with the hardware.

Actually, Applied Data Research got there first. Read more
