
Why I am Tempted to Replace Cassandra With DynamoDB

November 13, 2014

I have written about Cassandra in the past. I have been using Cassandra actively for the past three years, and I am one of the bigger advocates of the technology out there. However, as I have pointed out on this blog and on my Twitter page: if you plan on scaling Cassandra out, be prepared to recruit an army of Java developers to do devops. Cassandra becomes a devops nightmare beyond 3-4 nodes. In this post I am going to try to explain why.



I started seriously considering DynamoDB for my project when I began looking into our seemingly excessive inter-zone network charges. We traced them down to our three-node Cassandra cluster with a replication factor of 3, which essentially tripled our network charges on a regular basis. As we thought through optimization scenarios and whether we needed Cassandra at all for some parts of our application, DynamoDB began to make sense. We had already successfully replaced a custom ActiveMQ cluster with Amazon SQS, saving over $1,000 a month in AWS charges and even more in devops effort. Could we do the same with Cassandra?



Cassandra devops revolves around the following areas: capacity and replication planning, consistency, scaling up and down, software upgrades, node replacements, and regular repairs.



Capacity and Replication Planning



In order to plan capacity with Cassandra, one must understand the performance of a single node, the performance impact of replicating across more than one node, and the consistency trade-offs once more than one node is involved. There is no document that says, "If you provision this instance type on AWS and configure it in this way, you will get this many operations per second."



There is a multitude of settings in the configuration files that require a graduate degree in computer science to comprehend and that are best left alone at their defaults. In other words, there is no sure way for me to say that if I want this many concurrent users doing this many concurrent operations, I need this type of cluster.



Contrast that with DynamoDB, where capacity planning comes down to three questions: what is the minimum IOPS my application requires of a particular table, what is the maximum I am willing to pay for, and how often and when should I scale it. Period. End of story.
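To make that concrete, here is a minimal sketch of that entire planning exercise using the boto3 client (which postdates this post; the table name and throughput figures are purely illustrative):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Capacity planning is a single declaration: provisioned read and
    # write throughput on the table itself. (Illustrative values.)
    dynamodb.create_table(
        TableName="user_events",
        AttributeDefinitions=[
            {"AttributeName": "user_id", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "user_id", "KeyType": "HASH"},
        ],
        ProvisionedThroughput={
            "ReadCapacityUnits": 100,  # the reads/sec I need
            "WriteCapacityUnits": 50,  # the writes/sec I need
        },
    )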



Consistency



In the Cassandra world, consistency revolves around two factors: consistency level and replication factor. You can have fast performance and eventual consistency, or you can have slower performance and strong consistency. While the consistency level is specified per call, the replication factor is specified at keyspace initialization. If you ever want to change the replication factor, be prepared for hours of maintenance work, which becomes impractical on a live cluster once the number of nodes grows.
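A quick sketch of those two knobs, using the DataStax Python driver (our own cluster still speaks Thrift; the keyspace, table, and address below are illustrative):

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(["10.0.0.1"]).connect()

    # The replication factor is fixed at keyspace creation; changing it
    # later means an ALTER followed by repairs across the whole cluster.
    session.execute("""
        CREATE KEYSPACE app WITH replication =
            {'class': 'SimpleStrategy', 'replication_factor': 3}
    """)

    # The consistency level, by contrast, is chosen per statement.
    stmt = SimpleStatement(
        "SELECT * FROM app.users WHERE user_id = %s",
        consistency_level=ConsistencyLevel.QUORUM,  # stronger, slower
    )
    session.execute(stmt, ("alice",))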



Again, this is an area where the DynamoDB model makes much more sense. If I want consistent reads, I pay twice the IOPS. That's it. It becomes a purely financial decision.
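In boto3 terms the whole decision is one flag (table and key carried over from the earlier sketch):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Eventually consistent read: costs half a read capacity unit per
    # item of up to 4 KB.
    dynamodb.get_item(
        TableName="user_events",
        Key={"user_id": {"S": "alice"}},
    )

    # Strongly consistent read of the same item: a full read capacity
    # unit, i.e. consistency is literally priced at double the IOPS.
    dynamodb.get_item(
        TableName="user_events",
        Key={"user_id": {"S": "alice"}},
        ConsistentRead=True,
    )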



Scaling up and down



Scaling a Cassandra cluster involves adding new nodes. Each additional node requires hours of babysitting. The process of adding a node takes a few minutes, but bootstrapping can take hours. If you are using tokens, you are in an even bigger pickle, since you have to compute just the right balance, move tokens around, and clean up (we are still using tokens because this is a legacy production cluster, and there is no safe and easy way to migrate to vnodes). Once you have added a node, it becomes a fixed cost plus extra network charges. If you ever want to scale down, you have to work backwards and decommission the extra nodes, which takes hours, and then rebalance the cluster again if you are still using tokens.
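For the curious, this is what "computing just the right balance" amounts to on the RandomPartitioner, whose token ring spans 0 to 2**127 (a sketch of the arithmetic only; applying each new token still means a long nodetool move per node):

    # Ideal, evenly spaced initial_token values for a cluster on the
    # RandomPartitioner ring.
    def balanced_tokens(num_nodes):
        ring_size = 2 ** 127
        return [i * ring_size // num_nodes for i in range(num_nodes)]

    print(balanced_tokens(3))  # our current three-node ring
    print(balanced_tokens(4))  # add one node: every token except 0 moves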



The tokens vs. vnodes situation is a particular annoyance to me. Cassandra has left many of us excluded from this feature because it does not offer a clean, safe, and seamless upgrade mechanism.



Going back to DynamoDB, the only thing I need to care about is IOPS. What is my minimum? What is my maximum? How much am I willing to pay? Period. End of story.
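A hypothetical scaling policy in exactly those terms, again with boto3 (the thresholds and table name are made up; a scheduled job calling something like this replaces all of the babysitting above):

    import boto3

    dynamodb = boto3.client("dynamodb")

    MIN_READ_UNITS = 50    # my floor: capacity I always provision
    MAX_READ_UNITS = 1000  # my ceiling: the most I am willing to pay for

    def scale_reads(table_name, desired_units):
        # Clamp the request between the minimum and the maximum.
        units = max(MIN_READ_UNITS, min(desired_units, MAX_READ_UNITS))
        dynamodb.update_table(
            TableName=table_name,
            ProvisionedThroughput={
                "ReadCapacityUnits": units,
                "WriteCapacityUnits": 50,  # held constant in this sketch
            },
        )

    scale_reads("user_events", 400)  # e.g. ahead of an expected spike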



Software upgrades



Each time I had to upgrade Cassandra, the process was the same tedious routine: go to each node, upgrade the software, verify the settings have migrated (Cassandra does not offer tools to cleanly port settings from older versions), start the new binaries, and run the upgradesstables process. It is a process that is bound to ruin a weekend for me. I am simply no longer interested.
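Scripted, the per-node checklist looks roughly like this (a sketch that assumes a Debian-style package install; service and package commands vary by environment):

    import subprocess

    def upgrade_node():
        # Flush memtables and stop accepting traffic before the restart.
        subprocess.check_call(["nodetool", "drain"])
        subprocess.check_call(["sudo", "service", "cassandra", "stop"])
        # Upgrade the package. cassandra.yaml still has to be diffed by
        # hand, since there is no tool to port settings across versions.
        subprocess.check_call(["sudo", "apt-get", "install", "-y", "cassandra"])
        subprocess.check_call(["sudo", "service", "cassandra", "start"])
        # Rewrite the data files into the new on-disk format.
        subprocess.check_call(["nodetool", "upgradesstables"])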



One of my pet annoyances with Cassandra is how the project deprecated the Thrift API. Many of us have used the software for years and now have to either keep using a deprecated API or port our code to the new CQL. Some of us have chosen, wisely or not, to use a Thrift library that is no longer up to date. So to use the new API we have to port the code, and an obvious question comes up: if I have to port my code to a new library anyway, do I still want to use Cassandra?
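To illustrate the size of that port, here is the same single-row read in pycassa (one of the Thrift-era libraries) and in the CQL-native DataStax driver; the keyspace, table, and address are illustrative:

    # Before: pycassa, speaking Thrift.
    import pycassa

    pool = pycassa.ConnectionPool("app", ["10.0.0.1"])
    users = pycassa.ColumnFamily(pool, "users")
    row = users.get("alice")

    # After: the DataStax driver, speaking CQL. This is a rewrite of the
    # data access layer, not a drop-in swap.
    from cassandra.cluster import Cluster

    session = Cluster(["10.0.0.1"]).connect("app")
    row = session.execute(
        "SELECT * FROM users WHERE user_id = %s", ("alice",)
    ).one()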



With DynamoDB, I do not need to concern myself with software upgrades. Period. End of story.



Node replacements



This is similar to scaling, as described above. Node replacement in the Cassandra world is an hours-long process. No such thing with DynamoDB.



Regular repairs



As a cluster grows larger, especially in multi-data-center scenarios, Cassandra recommends running a regular repair process on each node. Again, this is a long-running process that imposes a significant IO workload on every node in the cluster. It can run for days on end, drives up disk utilization, and requires babysitting. On more than one occasion it has ruined a weekend for me.
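The recurring chore itself is a primary-range repair on every node, typically run from cron; a minimal wrapper (scheduling and host iteration omitted) might be:

    import subprocess

    def repair_primary_range():
        # "-pr" repairs only the ranges this node owns as a primary, so
        # running it on every node covers the cluster exactly once.
        subprocess.check_call(["nodetool", "repair", "-pr"])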



DynamoDB does not require me to do anything of the sort.



So what is the moral of this story?



From the data model perspective, DynamoDB and Cassandra are very similar. Cassandra certainly offers more flexibility, and all else being equal I would much prefer Cassandra over DynamoDB. However, with no managed offering that is as simple as DynamoDB, I really don't have the patience anymore.



Yes, there is Instaclustr. But that, too, misses the point. I have done the math: it is simply not cost-effective, and it requires me to do the same capacity planning exercises I am trying to avoid.



What I am really looking for is a fully managed Cassandra service that works just like DynamoDB, where I pay only for the capacity I actually use and can scale up and down with simple API calls. Until that happens, I see DynamoDB on my horizon.