Your reads are
"Consistent" means that for this particular Read/Write level combo, all nodes will "see" the same data. "Eventually consistent" means
that you might get old data from some nodes and new data for others until the data has been replicated across all devices. The idea is that this way you can
increase read/write speeds and improve tolerance against dead nodes.
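A minimal sketch of the rule behind this label, assuming the standard quorum condition that reads are consistent when the read level plus the write level exceeds the replication factor (the names `read_level`, `write_level`, and `replication_factor` are illustrative, not the calculator's actual code):

```python
def reads_are_consistent(read_level: int, write_level: int,
                         replication_factor: int) -> bool:
    """Reads are guaranteed to see the latest write when the read
    and write sets must overlap in at least one replica: R + W > RF."""
    return read_level + write_level > replication_factor

# RF=3: QUORUM reads + QUORUM writes (2 + 2 > 3) -> consistent
print(reads_are_consistent(2, 2, 3))   # True
# RF=3: ONE read + ONE write (1 + 1 <= 3) -> eventually consistent
print(reads_are_consistent(1, 1, 3))   # False
```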
You can survive the loss of
without impacting the application.
How many nodes can go down without the application noticing? This is a lower bound: in large clusters you could lose more nodes, and if they happen to be handling different parts of the keyspace, you wouldn't notice either.
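A rough sketch of where this lower bound comes from (again with illustrative names): both reads and writes must still reach enough live replicas, so each replica set tolerates losing the replication factor minus the larger of the two levels.

```python
def survivable_for_availability(read_level: int, write_level: int,
                                replication_factor: int) -> int:
    """Reads need R live replicas and writes need W, so a replica
    set stays fully usable after losing RF - max(R, W) nodes."""
    return replication_factor - max(read_level, write_level)

# RF=3 with QUORUM (2) reads and writes: one node can die unnoticed.
print(survivable_for_availability(2, 2, 3))  # 1
```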
You can survive the loss of
without data loss.
How many nodes can go down without physically losing data? This is a lower bound: in large clusters you could lose more nodes, and if they happen to be handling different parts of the keyspace, you wouldn't lose data either.
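A similar sketch for the data-loss bound, under the hedged assumption that an acknowledged write is only guaranteed to sit on `write_level` replicas at that moment (replication to the remaining replicas happens eventually):

```python
def survivable_without_data_loss(write_level: int) -> int:
    """An acknowledged write is guaranteed on W replicas, so at
    least one copy survives as long as fewer than W nodes die."""
    return write_level - 1

# With write level QUORUM (2), one node can die without losing
# even the most recently acknowledged write.
print(survivable_without_data_loss(2))  # 1
```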
You are really reading from
every time.
The more nodes you read from, the more network traffic ensues, and the bigger the latencies involved. A Cassandra read operation won't return until at least this many nodes have responded with some data value.
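The read level is set per statement in the drivers. For example, with the DataStax Python driver (the contact point, keyspace, and table are placeholders):

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])          # placeholder contact point
session = cluster.connect("my_keyspace")  # placeholder keyspace

# QUORUM: the read blocks until a majority of replicas respond.
query = SimpleStatement(
    "SELECT * FROM users WHERE user_id = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)
row = session.execute(query, ("alice",)).one()
```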
You are really writing to
every time.
The more nodes you write to, the more network traffic ensues, and the bigger the latencies involved. A Cassandra write operation won't return until at least this many nodes have acknowledged receiving the data.
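Writes take the same knob; the call below doesn't return until the chosen number of replicas have acknowledged the insert (same placeholder schema as the read sketch above):

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

session = Cluster(["127.0.0.1"]).connect("my_keyspace")  # placeholders

# ALL: the write blocks until every replica acknowledges, trading
# latency and availability for maximum durability.
insert = SimpleStatement(
    "INSERT INTO users (user_id, email) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.ALL,
)
session.execute(insert, ("alice", "alice@example.com"))
```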
Each node holds
of your data.
The bigger your cluster is, the more the data gets distributed across your nodes. If you are using the RandomPartitioner, or are very
good at distributing your keys when you use the OrderedPartitioner, this is how much data each of your nodes has to handle. It is also how much
of your keyspace becomes inaccessible for each node you lose beyond the safe limit above.
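Putting numbers on it, a sketch under the same illustrative naming: with evenly distributed keys, each node holds roughly `replication_factor / cluster_size` of the data, and that is also the slice of the keyspace that goes dark per node lost beyond the safe limit.

```python
def data_fraction_per_node(replication_factor: int, cluster_size: int) -> float:
    """RF copies of the data spread over N nodes: each node holds RF/N."""
    return replication_factor / cluster_size

# A 10-node cluster with RF=3: each node holds ~30% of the data,
# and each node lost beyond the safe limit makes ~30% of the
# keyspace inaccessible.
print(data_fraction_per_node(3, 10))  # 0.3
```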