Franz Inc., a leading supplier of graph database technology, with critical support from Stillwater SuperComputing Inc. and Intel, today announced it has achieved its goal of being the first to load and query a NoSQL database with a trillion RDF statements. RDF statements (also known as triples or quads) are the cornerstone of the Semantic Web: they provide a more flexible way to represent data than relational databases and are at the heart of the W3C's push for the Semantic Web.
Dr. Aasman points out: “NoSQL databases like Hadoop and Cassandra fail on joins. Big enterprises, big web companies and big government intelligence organizations are all looking into big data to work with massive amounts of semi-unstructured data. They are finding that NoSQL databases are wonderful if one needs access to a single object in an ocean of billions of objects. However, they also find that the current NoSQL databases fall short if you need to run graph database operations that require many complicated joins. A typical example would be performing a social network analysis query on a large telecom call detail record database.”
A common data integrity issue in any database is the unintended deletion of a row. Here is how to create a DELETE trigger in PostgreSQL that archives deleted rows.
The example code below assumes you have a “customer_cus” table and a “customer_archive_cua” table with at least two fields. It is a good idea to add a timestamp field to the archive table with DEFAULT now(), so that the date of deletion is captured.
CREATE FUNCTION archive_customer() RETURNS TRIGGER AS '
BEGIN
    -- column names follow the table-suffix convention described above
    INSERT INTO customer_archive_cua (name_cua, address_cua)
    VALUES (OLD.name_cus, OLD.address_cus);
    RETURN OLD;
END;
' LANGUAGE 'plpgsql';
CREATE TRIGGER customer_archive_on_delete AFTER DELETE ON customer_cus
FOR EACH ROW EXECUTE PROCEDURE archive_customer();
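The same archive-on-delete pattern can be sketched in SQLite (via Python's sqlite3), whose trigger syntax is close enough to illustrate the idea. Table and column names mirror the PostgreSQL example above; the sample row is made up for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_cus (name_cus TEXT, address_cus TEXT);
CREATE TABLE customer_archive_cua (
    name_cua TEXT,
    address_cua TEXT,
    -- timestamp default captures the date of deletion, as suggested above
    deleted_at_cua TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TRIGGER customer_archive_on_delete
AFTER DELETE ON customer_cus
BEGIN
    INSERT INTO customer_archive_cua (name_cua, address_cua)
    VALUES (OLD.name_cus, OLD.address_cus);
END;
""")

conn.execute("INSERT INTO customer_cus VALUES ('Alice', '1 Main St')")
conn.execute("DELETE FROM customer_cus")
rows = conn.execute(
    "SELECT name_cua, address_cua FROM customer_archive_cua"
).fetchall()
print(rows)  # the deleted row was archived
```

The deleted row lands in the archive table automatically, with no change to the application's DELETE statements.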
In DB2, if you need to populate a table, you use INSERT INTO, as in this example:
INSERT INTO new_table
SELECT col1,col2 FROM source_table
and if you need to populate host variables from a query, you use SELECT INTO, as in this example:
SELECT col1, col2, col3 INTO :var1, :var2, :var3
FROM source_table
WHERE col1 = 'something';
Source: DB2 Documentation
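A rough analogue of both idioms, sketched in SQLite via Python's sqlite3: INSERT INTO ... SELECT copies rows between tables, while DB2's host-variable SELECT INTO corresponds to fetching a single row into Python variables. The table contents here are invented for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE source_table (col1 TEXT, col2 INTEGER);
CREATE TABLE new_table (col1 TEXT, col2 INTEGER);
INSERT INTO source_table VALUES ('something', 1), ('other', 2);
""")

# Populate a table: INSERT INTO ... SELECT
conn.execute("INSERT INTO new_table SELECT col1, col2 FROM source_table")

# Populate variables: the driver-side equivalent of SELECT INTO :var1, :var2
var1, var2 = conn.execute(
    "SELECT col1, col2 FROM source_table WHERE col1 = 'something'"
).fetchone()
print(var1, var2)
```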
Even though we have migrated most of our core databases to DB2, we still have to deal with MySQL for some side systems, such as the CRM, the blog, etc. One MySQL quirk I recently noticed is that dropping an index on a sizable table takes forever. That is apparently because MySQL rebuilds the table to drop the index. The workaround was found here: MySQL General Discussion List
It goes as follows:
1) create table T1 like T;
This creates an empty table T1 with the same indexes ndx1, ndx2, ndx3 and ndx4.
2) alter table T1 drop index ndx3;
This drops index ndx3 on the empty T1, which should be instantaneous.
3) insert into T1 select * from T;
This will populate table T1 and build its three (3) remaining indexes in one pass.
4) drop table T;
5) alter table T1 rename to T;
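The five steps above can be sketched against SQLite via Python's sqlite3. SQLite does not share MySQL's rebuild-the-table behaviour (and has no CREATE TABLE ... LIKE, and requires schema-unique index names, hence the t1_ prefixes), so this only illustrates the mechanics: clone the schema, drop the unwanted index on the empty clone, bulk-copy the rows, then swap the tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T (a INTEGER, b INTEGER, c INTEGER);
CREATE INDEX ndx1 ON T(a);
CREATE INDEX ndx2 ON T(b);
CREATE INDEX ndx3 ON T(c);
INSERT INTO T VALUES (1, 2, 3), (4, 5, 6);
""")

# 1) create an empty clone with the same columns and indexes
conn.executescript("""
CREATE TABLE T1 (a INTEGER, b INTEGER, c INTEGER);
CREATE INDEX t1_ndx1 ON T1(a);
CREATE INDEX t1_ndx2 ON T1(b);
CREATE INDEX t1_ndx3 ON T1(c);
""")

# 2) drop the unwanted index while T1 is still empty (instantaneous)
conn.execute("DROP INDEX t1_ndx3")

# 3) bulk-copy the data, building the remaining indexes in one pass
conn.execute("INSERT INTO T1 SELECT * FROM T")

# 4) and 5) swap the tables
conn.execute("DROP TABLE T")
conn.execute("ALTER TABLE T1 RENAME TO T")

print(conn.execute("SELECT COUNT(*) FROM T").fetchone()[0])
```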
An alternative ugly hack is to start the index drop and then kill the database server, which crashes the table the index was being dropped from. The crashed table can then be recovered by running myisamchk.
Many drivers have ways to escape SQL strings to make sure no malicious activity is going on. Usually you can use a function in the driver that takes care of that. However, if all you need is to escape a single quote, you can also double it:
Before: reliability is key to gain customers' loyalty
After:  reliability is key to gain customers'' loyalty
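A small sketch of the quote-doubling escape next to the safer alternative of letting the driver bind parameters, shown with Python's sqlite3. The slogans table is made up for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE slogans (text TEXT)")

slogan = "reliability is key to gain customers' loyalty"

# Manual escaping: double every single quote inside the literal
escaped = slogan.replace("'", "''")
conn.execute("INSERT INTO slogans VALUES ('%s')" % escaped)

# Preferred: a bound parameter, where the driver handles the escaping
conn.execute("INSERT INTO slogans VALUES (?)", (slogan,))

rows = conn.execute("SELECT text FROM slogans").fetchall()
print(rows)  # both rows round-trip with the apostrophe intact
```

Parameter binding is the better default, since hand-built string escaping is exactly where SQL injection bugs creep in.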