See the series starting at The First Normal Form in Theory and Practice, Part I.
"The real definition of Big Data?? Simple: Whatever does not fit in Excel!"
Me: What is--precisely, pls!--the threshold from small to big data? And how do the structural, manipulative, and integrity aspects change across that threshold?
HM: Big data := the smallest set of data for which the sensitivity of your transfer function is minimal, and such that the cardinality of this set is too large to implement said transfer function on a single physical machine.
Transfer function := though not strictly a function, as in the lambda calculus, it is more of an operator which ingests data of any size and produces a monetizable product or service.
Sensitivity := the degree to which a perturbation of the input to a transfer function affects the result produced by said function.
Me: Ugh! I'm sure that's exactly what all the overnight data scientists, including those who invented Big Data, had in mind.
--LinkedIn.com
C: As a non-CS grad coming fresh to databases, I found both the entity-relationship and the object-oriented models confusing. Then I read Date's [1] and Codd's [2] books and papers on the relational model, the one from the 1970s that is basically set and type theory applied to data, and found it to be a much clearer and more powerful abstraction for dealing with your data model. As a non-"full time developer", it amazes me how many "experienced" developers are unaware of the relational model and do not know what a foreign key is, or why referential integrity might be important.
For example, your Relational Model introduction has a discussion of various data types. But arguably, whether your integer is implemented as BIGINT or TINYINT is an implementation decision which should be kept separate from the model discussion (as Date says). In other words, that attribute has a type of integer, and how that integer is stored is a separate issue which your RDBMS ought to abstract away (as, I think, Postgres does rather well, and MySQL rather annoyingly does not). The beauty of the latest RDBMS developments, particularly in the PostgreSQL world, is that the implementation has become so good that, in at least 95% of use cases, you no longer need to worry about it the way you did just a decade ago.
I think one can teach SQL (and the relational model) to a non-developer in about two hours, because it is so declarative and intuitive. One day I'll write that tutorial, as many clients sorely need it.
... the challenges inherent in the SQL RDBMS [sic] approach ... the constrained schema (or schema-first) approach of SQL RDBMS engines imposes semantic infidelity rather than fidelity on all applications and services that depend on this RDBMS type, solely ... SQL RDBMS engines (as per what I've outlined above) do impose a "one size fits all" constraint on DBMS driven apps and services that manifests as the "data variety issue" outlined by the "Big Data" meme.
--LinkedIn.com
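
C's two complaints are easy to make concrete. Below is a minimal sketch in PostgreSQL-flavoured SQL; the customer and purchase tables are invented for illustration. The model-level statement is simply that customer_id is an integer and that every purchase must reference an existing customer; how the engine stores the integer, or enforces the reference, is the engine's business.

    -- The model says: customer_id is an integer. Whether the engine stores it
    -- in 2, 4, or 8 bytes is an implementation detail it should hide from us.
    CREATE TABLE customer (
        customer_id integer PRIMARY KEY,
        name        text NOT NULL
    );

    CREATE TABLE purchase (
        purchase_id integer PRIMARY KEY,
        customer_id integer NOT NULL REFERENCES customer (customer_id),
        amount      numeric(10,2) NOT NULL
    );

    -- Referential integrity in action: a purchase pointing at a non-existent
    -- customer is rejected rather than silently stored as a dangling reference.
    INSERT INTO purchase (purchase_id, customer_id, amount)
    VALUES (1, 42, 9.99);   -- fails with a foreign-key violation unless customer 42 exists

Declaring the relationship once, via REFERENCES, is the whole of the referential-integrity story from the application's point of view: every subsequent insert, update, and delete is checked against it by the engine.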
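As for the "constrained schema (or schema-first)" approach the last quotation objects to, it is worth pinning down what the term means in practice: an attribute must be declared in the schema before any row can carry it. Another minimal, hypothetical sketch (the reading table and its columns are invented):

    -- Schema-first: declare the attributes up front.
    CREATE TABLE reading (
        sensor_id   integer   NOT NULL,
        taken_at    timestamp NOT NULL,
        temperature real,
        PRIMARY KEY (sensor_id, taken_at)
    );

    -- A new kind of measurement means a schema change first, data second.
    ALTER TABLE reading ADD COLUMN humidity real;

After the ALTER, every row of reading has a humidity attribute (NULL until populated), and no row can carry an attribute the schema does not declare. Whether that up-front declaration is a burden or a safeguard is exactly the disagreement quoted above.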
You see, in Cassandra 1.x, the data model is centered around what Cassandra calls “column families”. A column family contains rows, which are identified by a row key. The row key is what you need to fetch the data from the row. The row can then have one or more columns, each of which has a name, value, and timestamp. (A value is also called a “cell”). Cassandra’s data model flexibility comes from the following facts:
* column names are defined per-row
* rows can be “wide” — that is, have hundreds, thousands, or even millions of columns
* columns can be sorted, and ranges of ordered columns can be selected efficiently using “slices”.
--http://blog.parsely.com/post/1928/cass/

Compare this to the RDM.
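
One way to make that comparison concrete is to express the column-family structure described above in relational terms. This is only a sketch, not a claim about Cassandra's internals; the table, the columns, and the key values below are invented, again in PostgreSQL-flavoured SQL. A "wide row" becomes many tuples sharing a row_key, and a "slice" becomes a range predicate over the ordered column_name.

    -- A column family viewed as a relation: one tuple per (row, column) pair,
    -- carrying the column's name, value, and timestamp.
    CREATE TABLE column_family (
        row_key     text      NOT NULL,
        column_name text      NOT NULL,
        value       bytea,
        write_time  timestamp NOT NULL,
        PRIMARY KEY (row_key, column_name)
    );

    -- A "slice": an ordered range of columns within a single (possibly very wide) row.
    SELECT column_name, value
    FROM   column_family
    WHERE  row_key = 'user:1001'
      AND  column_name BETWEEN 'event:2013-01-01' AND 'event:2013-01-31'
    ORDER  BY column_name;

At the logical level, each of the three bullet points above has a straightforward counterpart in this rendering: column names live in data rather than in the schema, a row may have any number of (row_key, column_name) tuples, and ordered ranges of them are just range queries.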