Archive

The Dulin Report

Browsable archive from the WordPress export.

Results (46)

Should today’s developers worry about AI code generators taking their jobs? (Dec 11, 2022)
Book review: Clojure for the Brave and True (Oct 2, 2022)
Stop Shakespearizing (Sep 16, 2022)
Using GNU Make with JavaScript and Node.js to build AWS Lambda functions (Sep 4, 2022)
Monolithic repository vs a monolith (Aug 23, 2022)
Scripting languages are tools for tying APIs together, not building complex systems (Jun 8, 2022)
Good developers can pick up new programming languages (Jun 3, 2022)
Java is no longer relevant (May 29, 2022)
Automation and coding tools for pet projects on the Apple hardware (May 28, 2022)
There is no such thing as one grand unified full-stack programming language (May 27, 2022)
Most terrifying professional artifact (May 14, 2022)
TypeScript is a productivity problem in and of itself (Apr 20, 2022)
Tools of the craft (Dec 18, 2021)
Node.js and Lambda deployment size restrictions (Mar 1, 2021)
What programming language to use for a brand new project? (Feb 18, 2020)
Using Markov Chain Generator to create Donald Trump's state of union speech (Jan 20, 2019)
The religion of JavaScript (Nov 26, 2018)
Let’s talk cloud neutrality (Sep 17, 2018)
TypeScript starts where JavaScript leaves off (Aug 2, 2017)
Node.js is a perfect enterprise application platform (Jul 30, 2017)
Singletons in TypeScript (Jul 16, 2017)
Copyright in the 21st century or how "IT Gurus of Atlanta" plagiarized my and other's articles (Mar 21, 2017)
Collaborative work in the cloud: what I learned teaching my daughter how to code (Dec 10, 2016)
Amazon Alexa is eating the retailers alive (Jun 22, 2016)
What can we learn from the last week's salesforce.com outage? (May 15, 2016)
JEE in the cloud era: building application servers (Apr 22, 2016)
JavaScript as the language of the cloud (Feb 20, 2016)
In memory of Ed Yourdon (Jan 23, 2016)
Top Ten Differences Between ActiveMQ and Amazon SQS (Sep 5, 2015)
We Live in a Mobile Device Notification Hell (Aug 22, 2015)
What Every College Computer Science Freshman Should Know (Aug 14, 2015)
Ten Questions to Consider Before Choosing Cassandra (Aug 8, 2015)
The Three Myths About JavaScript Simplicity (Jul 10, 2015)
Book Review: "Shop Class As Soulcraft" By Matthew B. Crawford (Jul 5, 2015)
Big Data is not all about Hadoop (May 30, 2015)
Smart IT Departments Own Their Business API and Take Ownership of Data Governance (May 13, 2015)
Guaranteeing Delivery of Messages with AWS SQS (May 9, 2015)
Where AWS Elastic BeanStalk Could be Better (Mar 3, 2015)
Why I am Tempted to Replace Cassandra With DynamoDB (Nov 13, 2014)
How We Overcomplicated Web Design (Oct 8, 2014)
Docker can fundamentally change how you think of server deployments (Aug 26, 2014)
Cassandra: Lessons Learned (Jun 6, 2014)
Things I wish Apache Cassandra was better at (Feb 12, 2014)
"Hello, World!" Using Apache Thrift (Feb 24, 2013)
Have computers become too complicated for teaching? (Jan 1, 2013)
Java, Linux and UNIX: How much things have progressed (Dec 7, 2010)

Why I am Tempted to Replace Cassandra With DynamoDB

November 13, 2014

I have written about Cassandra in the past. I have been using Cassandra actively for the past three years, and I am one of the technology's bigger advocates. However, as I have pointed out on this blog and on my Twitter page, if you plan on scaling Cassandra out, be prepared to recruit an army of Java developers to do devops. Cassandra becomes a devops nightmare beyond 3-4 nodes. In this post I am going to try to explain why.



I started seriously considering DynamoDB for my project when I began looking into seemingly excessive inter-zone network charges. We traced them to our three-node Cassandra cluster with a replication factor of 3, which was essentially tripling our network charges on a regular basis. As we started thinking through optimization scenarios and whether we needed Cassandra at all for some parts of our application, DynamoDB began to make sense. We had already replaced a custom ActiveMQ cluster with Amazon SQS, resulting in over $1,000 in monthly savings on AWS charges, and even more savings in terms of devops. Could we do the same with Cassandra?



Cassandra devops revolves around the following areas: capacity and replication planning, consistency, scaling up and down, software upgrades, node replacements, and regular repairs.



Capacity and Replication Planning



In order to plan capacity with Cassandra, one must understand the performance of a single node, the performance impact of replication across more than one node, and the consistency behavior once multiple nodes are involved. There is no document that says, "If you provision this instance type on AWS and configure it this way, you will get this many operations per second."



There is a multitude of settings in the configuration files that require a graduate degree in computer science to comprehend and that are best left at their defaults. In other words, there is no sure way for me to say that if I want to support this many concurrent users doing this many concurrent operations, I need this type of cluster.



Contrast that with DynamoDB: as far as capacity planning goes, all I need to care about is the minimum IOPS my application requires for a particular table, the maximum I am willing to pay for, and how often and when I should scale it. Period. End of story.
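To make that concrete, here is a minimal sketch using the Python boto3 SDK; the table name, key schema, and capacity numbers are hypothetical, but they show that the entire capacity planning exercise collapses into one ProvisionedThroughput block:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Hypothetical table: all the capacity planning DynamoDB asks for is
# the provisioned read/write throughput numbers below.
dynamodb.create_table(
    TableName="user_events",                      # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},
    ],
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,                 # the IOPS we decided we need
        "WriteCapacityUnits": 50,
    },
)
```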



Consistency



In the Cassandra world, consistency revolves around two factors: consistency level and replication factor. You can have fast performance and eventual consistency, or you can have slower performance and strong consistency. While the consistency level is specified per call, the replication factor is specified at keyspace initialization. If you ever want to change the replication factor, be prepared for hours of maintenance work, which becomes impossible on a live cluster once the number of nodes grows.
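A minimal sketch of the split, using the DataStax Python driver (the keyspace, data center name, table, and addresses are hypothetical): the replication factor is baked into the keyspace definition, while the consistency level travels with each statement.

```python
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1"])   # hypothetical seed node
session = cluster.connect()

# Replication factor is fixed at keyspace creation; changing it later
# means an ALTER KEYSPACE followed by a repair on every node.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS app
    WITH replication = {'class': 'NetworkTopologyStrategy', 'us_east': 3}
""")

# The consistency level, by contrast, is chosen per statement.
stmt = SimpleStatement(
    "SELECT * FROM app.users WHERE user_id = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)
rows = session.execute(stmt, ("some-user-id",))
```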



Again, this is an area where the DynamoDB model makes much more sense. If I want strongly consistent reads, I pay twice for the IOPS. That's it. It becomes a purely financial decision.
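In boto3 terms (again a sketch, with the same hypothetical table), the whole decision is a single flag:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Eventually consistent read: the default, and half the read-capacity cost.
item = dynamodb.get_item(
    TableName="user_events",
    Key={"user_id": {"S": "some-user-id"}},
)

# Strongly consistent read: same call, one extra flag, twice the capacity consumed.
item = dynamodb.get_item(
    TableName="user_events",
    Key={"user_id": {"S": "some-user-id"}},
    ConsistentRead=True,
)
```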



Scaling up and down



Scaling a Cassandra cluster involves adding new nodes. Each additional node requires hours of babysitting. The process of adding a node takes a few minutes, but bootstrapping can take hours. If you are using manually assigned tokens, you are in a bigger pickle, since you have to compute just the right balance, move tokens around, and clean up (we are using tokens because this is a legacy production cluster, and there is no safe and easy way to migrate to vnodes). Once you have added a node, it becomes a fixed cost plus extra network charges. If you ever want to scale down, you have to work backwards and decommission the extra nodes, which takes hours, and then rebalance your cluster again if you are still using tokens.
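The "compute just the right balance" step is the classic initial-token calculation: spacing the nodes evenly around the partitioner's token ring. A rough sketch in Python, assuming the Murmur3Partitioner token range (the RandomPartitioner on older clusters like ours uses a different range, noted in the comment):

```python
def initial_tokens(node_count, ring_min=-2**63, ring_size=2**64):
    """Evenly space initial tokens around the token ring.

    Defaults assume the Murmur3Partitioner range [-2**63, 2**63 - 1];
    for the RandomPartitioner the ring is [0, 2**127) instead.
    """
    return [ring_min + (i * ring_size) // node_count for i in range(node_count)]

# Example: target tokens for a hypothetical 4-node cluster.
for node, token in enumerate(initial_tokens(4)):
    print(f"node {node}: initial_token = {token}")
```

Every time the node count changes, these targets change, and the existing nodes have to be moved onto them (nodetool move) and cleaned up (nodetool cleanup). That is where the hours of babysitting go.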



The tokens vs. vnodes situation is a particular annoyance of mine. Cassandra has left many of us excluded from this feature because it does not offer a clean, safe, and seamless migration mechanism.



Going back to DynamoDB, the only thing I need to care about is IOPS. What is my minimum? What is my maximum? How much am I willing to pay? Period. End of story.
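Scaling a DynamoDB table up or down is one API call against that provisioned throughput. A sketch, again with a hypothetical table and capacity numbers:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# "Scaling up" is raising the provisioned throughput ahead of peak load...
dynamodb.update_table(
    TableName="user_events",
    ProvisionedThroughput={"ReadCapacityUnits": 400, "WriteCapacityUnits": 200},
)

# ...and "scaling down" is lowering it again once the peak has passed.
# (DynamoDB does cap how often throughput can be decreased per day.)
dynamodb.update_table(
    TableName="user_events",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 50},
)
```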



Software upgrades



Each time I had to upgrade Cassandra, the process was the same and tedious: go to each node, upgrade the software, verify that the settings have migrated (Cassandra does not offer tools to cleanly port settings from older versions), start the new binaries, and run the sstable upgrade process (nodetool upgradesstables). It is a process that is bound to ruin a weekend for me. I am simply no longer interested.



One of my pet annoyances with Cassandra is how the Thrift API was deprecated. Many of us have used the software for years and now have to either keep using a deprecated API or port our code to CQL. Some of us have chosen, wisely or not, to use a Thrift library that is no longer kept up to date. So to use the new API we have to port the code, and an obvious question comes up: if I have to port my code to a new library anyway, do I still want to use Cassandra?



I do not need to concern myself with software upgrades with DynamoDB. Period. End of story.



Node replacements



This is similar to scaling, as described above. Node replacement in the Cassandra world is an hours-long process. There is no such thing with DynamoDB.



Regular repairs



As a cluster grows larger, especially in multi-data-center scenarios, Cassandra recommends running a regular repair process on each node. Again, this is a long-running process that imposes a significant IO workload on every node in the cluster. It can run for days on end, results in extra disk utilization, and requires babysitting. On more than one occasion it has ruined a weekend for me.



DynamoDB does not require me to do anything of the sort.



So what is the moral of this story?



From the data model perspective, DynamoDB and Cassandra are very similar. Cassandra certainly offers more flexibility, and I would much prefer Cassandra over DynamoDB. However, with no managed Cassandra offering that is as simple as DynamoDB, I really don't have the patience anymore.



Yes, there is Instaclustr. But that, too, misses the point. I have done the math: it is simply not cost effective, and it requires me to do the same capacity planning exercises I am trying to avoid.



What I am really looking for is a fully managed Cassandra service that works just like DynamoDB, where I pay only for the capacity I actually use and can scale up and down with simple API calls. Until that happens, I see DynamoDB on my horizon.