Monday, 7 July 2014

Integrated Approach

Main article: Database machine
In the 1970s and 1980s attempts were made to
build database systems with integrated
hardware and software. The underlying
philosophy was that such integration would
provide higher performance at lower cost.
Examples were IBM System/38, the early
offering of Teradata, and the Britton Lee, Inc.
database machine.
Another approach to hardware support for
database management was ICL's CAFS
accelerator, a hardware disk controller with
programmable search capabilities. In the long
term, these efforts were generally unsuccessful
because specialized database machines could
not keep pace with the rapid development and
progress of general-purpose computers. Thus
most database systems nowadays are software
systems running on general-purpose hardware,
using general-purpose computer data storage.
However, this idea is still pursued in certain
applications by companies such as Netezza
and Oracle (Exadata).
Late 1970s, SQL DBMS
IBM started working on a prototype system
loosely based on Codd's concepts as System R
in the early 1970s. The first version was ready
in 1974/5, and work then started on multi-table
systems in which the data could be split so that
all of the data for a record (some of which is
optional) did not have to be stored in a single
large "chunk". Subsequent multi-user versions
were tested by customers in 1978 and 1979, by
which time a standardized query language –
SQL – had been added. Codd's
ideas were establishing themselves as both
workable and superior to CODASYL, pushing
IBM to develop a true production version of
System R, known as SQL/DS and, later,
Database 2 (DB2).
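The idea behind those multi-table systems, that optional parts of a record can live in their own table rather than inside one large "chunk", is easiest to see in a modern relational system. The sketch below uses SQLite rather than System R itself, and the table and column names are hypothetical:

```python
import sqlite3

# Illustrative sketch: optional data lives in its own table instead of
# being stored inside one large record "chunk". Table and column names
# are hypothetical, not taken from System R.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE phone (employee_id INTEGER, number TEXT)")

conn.execute("INSERT INTO employee VALUES (1, 'Ada'), (2, 'Grace')")
conn.execute("INSERT INTO phone VALUES (1, '555-0100')")  # only Ada has one

# A LEFT JOIN reassembles the full record; missing optional data is NULL.
rows = conn.execute(
    "SELECT e.name, p.number FROM employee e "
    "LEFT JOIN phone p ON p.employee_id = e.id ORDER BY e.id"
).fetchall()
print(rows)  # [('Ada', '555-0100'), ('Grace', None)]
```

Records without the optional data simply have no row in the second table, instead of reserving empty space in every record.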
Larry Ellison's Oracle started from a different
chain, based on IBM's papers on System R, and
beat IBM to market when the first version was
released in 1978.
Michael Stonebraker went on to apply the lessons from
INGRES to develop a new database, Postgres,
which is now known as PostgreSQL.
PostgreSQL is often used for global mission-critical
applications (the .org and .info domain
name registries use it as their primary data
store, as do many large companies and financial
institutions).
In Sweden, Codd's paper was also read and
Mimer SQL was developed from the mid-1970s
at Uppsala University. In 1984, this project was
consolidated into an independent enterprise. In
the early 1980s, Mimer introduced transaction
handling for high robustness in applications, an
idea that was subsequently implemented on
most other DBMSs.
Another data model, the entity-relationship
model , emerged in 1976 and gained popularity
for database design as it emphasized a more
familiar description than the earlier relational
model. Later on, entity-relationship constructs
were retrofitted as a data modeling construct
for the relational model, and the difference
between the two has become irrelevant.
1980s, on the desktop
The 1980s ushered in the age of desktop
computing . The new computers empowered
their users with spreadsheets like Lotus 1-2-3
and database software like dBASE. The dBASE
product was lightweight and easy for any
computer user to understand out of the box. C.
Wayne Ratliff, the creator of dBASE, stated:
“dBASE was different from programs like BASIC,
C, FORTRAN, and COBOL in that a lot of the dirty
work had already been done. The data
manipulation is done by dBASE instead of by
the user, so the user can concentrate on what
he is doing, rather than having to mess with the
dirty details of opening, reading, and closing
files, and managing space allocation.” [14]
dBASE was one of the top selling software titles
in the 1980s and early 1990s.
1980s, object-oriented
The 1980s, along with a rise in object-oriented
programming , saw a growth in how data in
various databases were handled. Programmers
and designers began to treat the data in their
databases as objects. That is to say that if a
person's data were in a database, that person's
attributes, such as their address, phone
number, and age, were now considered to
belong to that person instead of being
extraneous data. This allows relations
between data to be relations between objects and
their attributes, rather than between individual fields. [15]
The term "object-relational impedance
mismatch" described the inconvenience of
translating between programmed objects and
database tables. Object databases and object-
relational databases attempt to solve this
problem by providing an object-oriented
language (sometimes as extensions to SQL) that
programmers can use as an alternative to purely
relational SQL. On the programming side,
libraries known as object-relational mappings
(ORMs) attempt to solve the same problem.
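The mismatch comes down to translating between an application object and a flat table row. The hand-rolled mapping below illustrates the work an ORM automates; it is a minimal sketch and not the API of any particular ORM library:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Person:       # the programmer's view: one object owning its attributes
    name: str
    phone: str
    age: int

# The database's view: a flat row in a table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, phone TEXT, age INTEGER)")

def save(p: Person) -> None:
    # The mapping step an ORM would generate: object -> row
    conn.execute("INSERT INTO person VALUES (?, ?, ?)",
                 (p.name, p.phone, p.age))

def load(name: str) -> Person:
    # And the reverse: row -> object
    row = conn.execute(
        "SELECT name, phone, age FROM person WHERE name = ?", (name,)
    ).fetchone()
    return Person(*row)

save(Person("Ada", "555-0100", 36))
print(load("Ada"))  # Person(name='Ada', phone='555-0100', age=36)
```

An ORM generates this boilerplate from the class definition, so the programmer works only with objects while the ORM emits the SQL.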
2000s, NoSQL and NewSQL
Main articles: NoSQL and NewSQL
The next generation of post-relational databases
in the 2000s became known as NoSQL
databases, including fast key-value stores and
document-oriented databases. XML databases
are a type of structured document-oriented
database that allows querying based on XML
document attributes. XML databases are mostly
used in enterprise database management, where
XML is being used as the machine-to-machine
data interoperability standard. XML databases
are mostly commercial software systems that
include Clusterpoint, [16] MarkLogic [17] and
Oracle XML DB. [18]
NoSQL databases are often very fast, do not
require fixed table schemas, avoid join
operations by storing denormalized data, and
are designed to scale horizontally. The most
popular NoSQL systems include MongoDB,
Couchbase, Riak, memcached, Redis, CouchDB,
Hazelcast, Apache Cassandra and HBase, [19]
which are all open-source software products.
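The key-value, denormalized access pattern described above can be contrasted with a relational join in a few lines of plain Python. This dict-based sketch only illustrates the pattern, not the behavior of any particular NoSQL product:

```python
# Denormalized document store: everything about a user lives under one
# key, so a read is a single lookup with no join. Plain-Python sketch;
# the key scheme and fields are hypothetical.
store = {}

store["user:1"] = {
    "name": "Ada",
    "phones": ["555-0100"],               # nested instead of a phone table
    "orders": [{"id": 7, "total": 9.5}],  # duplicated here, not joined in
}

def get(key):
    return store.get(key)  # one lookup, no schema, no join

print(get("user:1")["name"])  # Ada
```

Because each document carries its own copy of related data, reads stay fast and the data set can be partitioned across machines by key, at the cost of duplicating data that a relational schema would normalize.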
In recent years, there has been high demand for
massively distributed databases with high
partition tolerance. According to the CAP
theorem, however, it is impossible for a distributed
system to simultaneously provide consistency,
availability, and partition-tolerance guarantees; it
can satisfy any two of these
guarantees at the same time, but not all three.
For that reason, many NoSQL databases
use what is called eventual consistency to
provide both availability and partition-tolerance
guarantees, along with the maximum level of data
consistency achievable under those constraints.
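A toy sketch of eventual consistency: two replicas accept writes independently and later converge through a last-write-wins merge keyed on a timestamp. Real systems use mechanisms such as vector clocks or CRDTs; this is only the simplest possible illustration, with hypothetical names throughout:

```python
# Each replica stores key -> (timestamp, value). Writes go to either
# replica; a later anti-entropy pass merges them (last write wins).
replica_a = {}
replica_b = {}

def write(replica, key, value, ts):
    replica[key] = (ts, value)

def merge(a, b):
    # Converge both replicas to the newest value for every key.
    for key in set(a) | set(b):
        winner = max(a.get(key, (-1, None)), b.get(key, (-1, None)))
        a[key] = b[key] = winner

write(replica_a, "x", "old", ts=1)  # client 1 writes to replica A
write(replica_b, "x", "new", ts=2)  # client 2 writes to replica B

# Before the merge the replicas disagree (the system stayed available
# during the partition); after it, both hold the newest write.
merge(replica_a, replica_b)
print(replica_a["x"])  # (2, 'new')
```

Between the writes and the merge, a reader may see stale data; eventual consistency promises only that, once updates stop, all replicas converge to the same value.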
NewSQL is a class of modern relational
databases that aims to provide the same
scalable performance of NoSQL systems for
online transaction processing (read-write)
workloads while still using SQL and maintaining
the ACID guarantees of a traditional database
system. Such databases include Clustrix,
EnterpriseDB, NuoDB [20] and VoltDB.
