Amazon moving off Oracle? #DBfreedom

A bunch of news stories, apparently stemming from an article in The Information, are talking about Amazon and Salesforce attempting to move away from Oracle.  I'm not going to comment specifically on Amazon's, or Salesforce's, efforts to move away from Oracle's database, but I will comment on that general topic.  And a little on Amazon (Web Services) in databases.

tl;dr It might not be possible to completely migrate off of the Oracle database, but lots of companies are capping their long-term Oracle cost exposure.

There are a ton of efforts out there to make it easier for customers to move off of the Oracle database.  The entire PostgreSQL community has had making that possible as a key priority for many years.  There are PostgreSQL derivatives, like EnterpriseDB's Postgres Advanced Server, that go much further than just providing an Oracle equivalent.  They target direct execution of ported applications by adding PL/SQL compatibility via SPL, supporting popular Oracle pre-supplied packages, offering an OCI connector, and other compatibility features.  Microsoft started a major push on migrating Oracle applications to SQL Server back in the mid-2000s with SQL Server Migration Assistant, and re-invigorated that effort last year.  IBM has a similar effort for DB2, which includes its own PL/SQL implementation.  And, of course, the most talked about effort of the last few years is the one by AWS.  The AWS Database Migration Service (DMS) and Schema Conversion Tool (SCT) have allowed many applications to be moved off of Oracle to other databases, including to Aurora MySQL, Aurora PostgreSQL, and Redshift, which take advantage of the cloud to provide enterprise-level scalability and availability without the Oracle licensing tax.

Note that Andy isn’t specifically saying there have been 50K migrations off of Oracle; that’s the total number across all sources and destinations.  But a bunch of them clearly have Oracle as the source, and something non-Oracle as the destination.
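To give a feel for the DMS side of such a move, here is a minimal boto3 sketch that registers an Oracle source, a PostgreSQL target, and a full-load migration task.  Every identifier, host name, ARN, and credential below is a hypothetical placeholder; a real migration also needs a replication instance, schema conversion via SCT, and a lot of testing.

```python
import boto3

# Hypothetical sketch: register an Oracle source and an Aurora PostgreSQL
# target with AWS DMS, then define a full-load migration task.
dms = boto3.client("dms", region_name="us-east-1")

source = dms.create_endpoint(
    EndpointIdentifier="legacy-oracle-source",    # placeholder name
    EndpointType="source",
    EngineName="oracle",
    ServerName="oracle.example.internal",         # placeholder host
    Port=1521,
    DatabaseName="ORCL",
    Username="migration_user",
    Password="********",
)

target = dms.create_endpoint(
    EndpointIdentifier="aurora-postgres-target",  # placeholder name
    EndpointType="target",
    EngineName="aurora-postgresql",
    ServerName="aurora.example.internal",         # placeholder host
    Port=5432,
    DatabaseName="app",
    Username="migration_user",
    Password="********",
)

task = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-postgres-full-load",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE",  # placeholder
    MigrationType="full-load",  # or "full-load-and-cdc" for ongoing replication
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1",'
                  '"rule-name":"1","object-locator":{"schema-name":"%",'
                  '"table-name":"%"},"rule-action":"include"}]}',
)
print(task["ReplicationTask"]["Status"])
```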

On the surface, the move away from the Oracle database is purely a balance between the cost of switching technologies and the cost of sticking with Oracle.  Or, maybe in rare cases, the difficulty of achieving the right level of technological parity.  But that isn’t the real story of what it takes to move away from Oracle.

Sure, many apps can be manually moved over with a few hours or days of work.  Others can be moved pretty easily with the tooling provided by AWS or others, with days to weeks of work.  The occasional really complex app might take many person-months or person-years to move.  But if you have the source code, and you have (or can hire/contract) the expertise, you can move the applications.  And people do.  A CIO could look at spending, say, $5 million or $25 million or $100 million to port the company’s bespoke apps and think they can’t afford it.  Or they could look at that amount and say “ah, but then I don’t have to write that big check to Oracle every year”.  So if you think long-term, and hate dealing with Oracle’s licensing practices (e.g., audits, reinterpreting terms when it suits them, inviting non-compliance then using it to force cloud adoption, etc.), then the cost to move your bespoke applications is readily justified.  So what are the real barriers to moving off the Oracle database?
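To make that long-term framing concrete, here is a back-of-the-envelope payback calculation.  All of the figures are purely illustrative placeholders, not numbers from any real migration.

```python
# Illustrative only: hypothetical numbers for a port-vs-stay decision.
port_cost = 25_000_000           # one-time cost to port the bespoke apps
annual_oracle_spend = 8_000_000  # recurring license + support avoided after the port
annual_target_spend = 1_500_000  # recurring cost of the replacement stack

annual_savings = annual_oracle_spend - annual_target_spend
payback_years = port_cost / annual_savings

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Payback period: {payback_years:.1f} years")  # ~3.8 years with these numbers
```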

Barrier number one is 3rd party applications.  Sometimes these aren’t a barrier at all.  Using Tableau?  It works with multiple database engines, including Amazon Redshift, PostgreSQL, etc.  Using ArcGIS?  It just so happens that PostgreSQL with the PostGIS extension is one of the many engines it supports.  Using PeopleSoft?  Things just got a bit more difficult.  Because PeopleSoft supported other database systems when Oracle acquired it, there are options, but they are all commercial engines (e.g., Informix, Sybase, and of course Microsoft SQL Server), and I don’t know how well Oracle is supporting them for new (re-)installations.  You can’t move to an open source, or open source compatible, engine.  Using Oracle E-Business Suite?  You’re screwed; you can’t use any database other than the Oracle database.  Given that Oracle has acquired so many applications over the years, there is a good chance your company is running on some Oracle-controlled application.  And Oracle is taking no steps to have its applications support any new databases, not even the Oracle-owned MySQL.

Oracle’s ownership of both the database and key applications has created a near lock-in to the Oracle database.  I say “near” because you can in theory move to a non-Oracle application, and may do so over time.  But when you’ve lived through stories of companies spending hundreds of millions of dollars to implement ERP and CRM solutions, the cost of swapping out E-Business Suite or Siebel makes it hard to consider.  And without doing that, complete elimination of your Oracle database footprint is off the table.

Now on to the second issue, Oracle’s licensing practices.  I’m not an Oracle licensing expert, so I will apologize in advance for the lack of details and potential misstatements.  But generally speaking, many (if not most) customers have licensed the Oracle database on terms that don’t really allow for a reduction in costs.  Let’s say you purchased licenses and support for 10,000 cores, and you are now only using 1,000 cores.  Oracle won’t allow you to purchase support for just those 1,000 cores; if you want support, you have to keep purchasing it for the total number of core licenses you own.  And since Oracle only makes security patches available under a support contract, it is very hard to run Oracle without purchasing support.  If you have an “all you can eat” type of agreement, getting out of it means counting up all the core licenses you are currently using.  You can then stop paying the annual “all you can eat” price, but you still have to pay for support on all the licenses you had when you terminated the “all you can eat” arrangement.  Even if you are now only using 1 core of Oracle.
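To illustrate that quirk with made-up numbers (these are placeholders, not real Oracle prices): support is priced off the licenses you own, not the cores you actually run.

```python
# Hypothetical figures, purely to show the shape of the problem:
# support is owed on owned licenses, not on cores in use.
owned_core_licenses = 10_000
cores_in_use = 1_000
support_per_core_per_year = 1_000   # placeholder dollar figure

support_bill = owned_core_licenses * support_per_core_per_year
print(f"Cores actually in use: {cores_in_use}")
print(f"Annual support bill:   ${support_bill:,.0f}")  # $10,000,000 either way
# Shrinking usage to 1,000 cores (or even 1) does not shrink this bill;
# only shedding the licenses themselves would.
```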

To top it off, you can see how these two interact.  Even if just one third-party application keeps you using the Oracle database, you will be paying Oracle support for every Oracle license you ever owned.  Completely getting off Oracle requires a real belief that the short to mid-term pain is worth the long-term gain.

So does this “get off Oracle” thing sound hopeless?  NO.  For any healthy company, the number of cores being used grows year after year.  It doesn’t matter if you have an “all you can eat” agreement; all that does is commit you to an indefinite life of high support costs.  What moving the moveable existing apps, and implementing new apps on open source/open source-compatible engines, allows you to do is stop growing the number of Oracle cores you license.  You move existing applications to PostgreSQL (or something else) to free up Oracle core licenses for applications that can’t easily be moved.  You use PostgreSQL for new applications, so they never need an Oracle core license.  You can’t eliminate Oracle, but you can cap your future cost exposure.  And then at some point you’ll find the Oracle core licenses represent a small enough part of your IT footprint that you’ll be able to make the final push to eliminate them.

Switching topics a little, one of the most annoying things about this is the claim in some of the articles that Amazon needs to build a new database.  Hello?  AWS has created DynamoDB, Redshift, Aurora MySQL, Aurora PostgreSQL, Neptune, and a host of other database technologies.  DynamoDB has roots in the NoSQL-defining Dynamo work, which predates any of this.  Amazon has a strong belief in NoSQL for certain kinds of systems, and that is reflected in the stats from the last Amazon Prime Day: DynamoDB handled 3.4 trillion requests, peaking at 12.9 million per second.  For those applications that want relational, Aurora is a great target for OLTP, and Redshift (plus Redshift Spectrum, when you want to divorce compute from storage) for data warehousing.  You think the non-AWS parts of Amazon aren’t taking advantage of those technologies as well?  Plus Athena, ElastiCache, RDS in general, etc.?  Puhleeze.
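For readers less familiar with the NoSQL access pattern being described, here is a tiny boto3 sketch of the kind of key-value workload DynamoDB is built for.  The table name, key schema, and item fields are all hypothetical, not taken from anything Amazon has published.

```python
import boto3

# Hypothetical example of a simple key-value workload on DynamoDB.
# Assumes a table named "orders" already exists with partition key
# "order_id" (string); none of these names come from the article.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
orders = dynamodb.Table("orders")

# Write one item.
orders.put_item(Item={
    "order_id": "ORD-12345",
    "customer_id": "CUST-987",
    "status": "SHIPPED",
    "total_cents": 4999,
})

# Read it back by primary key -- the straightforward lookup pattern
# that scales to Prime Day request volumes.
response = orders.get_item(Key={"order_id": "ORD-12345"})
print(response.get("Item"))
```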


2 Responses to Amazon moving off Oracle? #DBfreedom

  1. Bob - Former Decie says:

    I have supported a home-grown DBMS on the PDP-11, used Cincom’s TOTAL and DEC’s CODASYL database, and have been a user and firm believer in Rdb and SQL Server in the past. I now think an open-source DB with good support is the way to go for most businesses. YMWV.

  2. Yuri Budilov says:

    Hi Hal,

    All agreed, except for the part about AWS building a new database. LOL 8^)
    Maybe not new, but at least fix the existing engine? And how about the ability to load/unload from AWS S3, please? 8^)

    Below is not a criticism, but my personal observation as a recent AWS RDS PostgreSQL user.

    I have been working with PostgreSQL 9.6 RDS (not Aurora) for the last few months, and prior to that was spoiled rotten by Microsoft SQL Server for the last 20 years (thanks Hal & Co!!!). So my opinion is biased, but I still think that the PostgreSQL RDBMS engine leaves much to be desired. Some of it may be fixed by Aurora, I don’t know. But a lot of it is not fixed. The list of enterprise-level engine features missing from PostgreSQL is quite long. How AWS could fix this I don’t have a clue, but until it is done I don’t see PostgreSQL challenging the most demanding workloads that SQL Server 2016+ and Oracle 12c can handle without breaking a sweat. I am sure the engine shortcomings of PostgreSQL are well known to AWS database wizards. IMHO the optimizer is about as good as SQL Server 2000’s was. The query parallelism is very weak compared to MS-SQL 2016+, the data compression is not there, the in-memory row/column store is not there, the on-disk column-store is not there (ignore Redshift for now), the optimizer hints are only available via an extension and are unfriendly to use, and the end-user tooling is not there (where are graphical query plans, please?). Is there a profiler/extended-events equivalent in PostgreSQL? The process of problem solving in MS-SQL is a dream compared to doing the same in PostgreSQL, which is closer to Oracle (this is not a compliment!! Oracle tooling sucks!). From time to time my queries crash due to server-overload errors, something I have not seen SQL Server do for 10+ years.
    The PostgreSQL database cache is odd (it leans on the OS file system cache, it seems), which is why tuning memory and database I/O is also odd/difficult compared to MS-SQL. Much of it can be fixed by Aurora, I guess. Other things could be done by AWS contributing to PostgreSQL open source, if it makes sense to AWS…. And so on. I know PostgreSQL 10.0 has partitioning and improved parallelism etc., but it is still ~5-10 years behind SQL Server 2016-2017, imho, as a database engine.
    In summary, I do like the low cost of PostgreSQL, and I do like its SQL/API too, but the database engine back-end is not even close to MS-SQL 2016-2017, which I still think is the best RDBMS money can buy!! LOL 8^)
