Normalization is a technique applied to organize the tables in a database. Its main objectives are to eliminate redundancy and to prevent anomalies when inserting, updating, and deleting data. The process also ensures that the data dependencies shared throughout a database are logical and make sense. Once normalization has taken place, records can be updated and deleted without worrying that related data might be lost (Kanade, Gopal, & Kanade, 2014). The process improves the design of the database and makes better use of its resources. In practice, a table is normally normalized up to the third normal form.
The first normalization step produces the first normal form, in which repeating groups are eliminated by ensuring that every value in the database is atomic, that is, each column of each row holds a single indivisible value. As a result, each piece of data can be fetched from the unique row that represents it, identified by a primary or unique key. Flattening repeating groups in this way makes rows unique, but it duplicates the other column values across the new rows and consequently increases data redundancy. The second normalization step eliminates that redundancy by analyzing how every column relates to the primary key of its row. Relationships between tables are established by linking primary keys to the corresponding foreign keys. A table is in the second normal form when every non-key attribute depends on the whole primary key. Where the primary key is composite, no column may depend on only part of the key; any such partially dependent column is moved to its own table to preserve uniqueness and eliminate the redundancy completely.
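The two steps above can be sketched with Python's built-in sqlite3 module. The order/customer/product schema here is a hypothetical illustration, not from the source: a repeating group of products is first flattened into atomic rows (1NF), and the partially dependent `customer` column (it depends only on `order_id`, part of the composite key) is then moved to its own table (2NF).

```python
import sqlite3

# Hypothetical unnormalized data: the third column holds a repeating
# group of products in one value, violating 1NF.
unnormalized = [
    (1, "Alice", "pen,notebook"),
    (2, "Bob", "pen"),
]

con = sqlite3.connect(":memory:")
cur = con.cursor()

# 1NF: one atomic value per column -- one row per (order, product).
# Note the customer name is now duplicated across an order's rows.
cur.execute("""CREATE TABLE order_items_1nf (
    order_id INTEGER, customer TEXT, product TEXT,
    PRIMARY KEY (order_id, product))""")
for order_id, customer, products in unnormalized:
    for product in products.split(","):
        cur.execute("INSERT INTO order_items_1nf VALUES (?, ?, ?)",
                    (order_id, customer, product))

# 2NF: `customer` depends on order_id alone, a partial dependency on
# the composite key, so it is split into its own table.
cur.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer TEXT)")
cur.execute("""CREATE TABLE order_items (
    order_id INTEGER, product TEXT,
    PRIMARY KEY (order_id, product))""")
cur.execute("INSERT INTO orders SELECT DISTINCT order_id, customer FROM order_items_1nf")
cur.execute("INSERT INTO order_items SELECT order_id, product FROM order_items_1nf")

print(cur.execute("SELECT COUNT(*) FROM orders").fetchone()[0])       # 2 orders
print(cur.execute("SELECT COUNT(*) FROM order_items").fetchone()[0])  # 3 line items
```

After the split, each customer name is stored once per order rather than once per line item, and `order_id` in `order_items` acts as the foreign key linking back to `orders`.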
The third normal form requires that every non-key column depend on the primary key alone and not on any other non-key attribute. This stage eliminates transitive functional dependencies: if a column depends on another non-key column rather than directly on the primary key, it is moved to a separate table of its own. A highly relevant example is a database that records student identification, teacher identification, and class identification. This offers an excellent normalization case because the tables share the same data yet each remains unique (Coronel & Morris, 2016). Redundancy and data duplication must therefore be eliminated and all the relevant relationships established.
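A minimal sketch of the student/teacher/class example, with hypothetical identifiers and names: in a pre-3NF table, a teacher's name would depend on `teacher_id`, a non-key column, so it reaches the enrolment key only transitively. Moving teachers into their own table removes that dependency, and renaming a teacher becomes a single-row update.

```python
import sqlite3

# Hypothetical pre-3NF rows: (student_id, class_id, teacher_id, teacher_name).
# teacher_name depends on teacher_id, not on the (student_id, class_id) key,
# which is the transitive dependency 3NF removes.
rows = [
    ("S1", "C1", "T1", "Ng"),
    ("S2", "C1", "T1", "Ng"),
    ("S3", "C2", "T2", "Rao"),
]

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""CREATE TABLE enrolment (
    student_id TEXT, class_id TEXT, teacher_id TEXT,
    PRIMARY KEY (student_id, class_id))""")
cur.execute("CREATE TABLE teachers (teacher_id TEXT PRIMARY KEY, teacher_name TEXT)")

for student_id, class_id, teacher_id, teacher_name in rows:
    cur.execute("INSERT INTO enrolment VALUES (?, ?, ?)",
                (student_id, class_id, teacher_id))
    # OR IGNORE keeps one row per teacher instead of one per enrolment.
    cur.execute("INSERT OR IGNORE INTO teachers VALUES (?, ?)",
                (teacher_id, teacher_name))

# Each teacher's name is now stored exactly once, so a rename is a
# single-row update with no risk of inconsistent copies.
cur.execute("UPDATE teachers SET teacher_name = 'Ngo' WHERE teacher_id = 'T1'")
```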
Denormalization is a technique applied to a normalized database to improve its performance. It involves deliberately adding redundant data where the database administrators consider it necessary: extra attributes can be merged into existing tables, new tables can be created, or instances of a normalized table can be duplicated. The main goal is to reduce the running time of particular queries drastically, either by making data more directly accessible to those queries or by generating summarized records of the same data. Denormalization is typically warranted when maintaining history, improving query performance, increasing the speed of reporting, or computing commonly required values upfront to ensure consistency and efficiency.
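One of those situations, computing a commonly required value upfront, can be sketched as follows. The orders schema and the redundant `total` column are hypothetical: the total is derivable from `order_items`, but storing it on `orders` lets reporting queries skip the join and aggregate each time.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE orders (order_id INTEGER PRIMARY KEY);
CREATE TABLE order_items (order_id INTEGER, price REAL);
INSERT INTO orders VALUES (1), (2);
INSERT INTO order_items VALUES (1, 5.0), (1, 7.5), (2, 3.0);
""")

# Denormalization: add a redundant, precomputed total to each order so
# reports read one row instead of aggregating the line items.
cur.execute("ALTER TABLE orders ADD COLUMN total REAL")
cur.execute("""
    UPDATE orders SET total = (
        SELECT SUM(price) FROM order_items
        WHERE order_items.order_id = orders.order_id)
""")
print(cur.execute("SELECT total FROM orders WHERE order_id = 1").fetchone()[0])  # 12.5
```

The trade-off is exactly the redundancy normalization removes: every change to `order_items` must now also refresh `total`, which is why denormalization is reserved for cases where the read speed justifies that maintenance cost.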
Finally, normalization and denormalization have a major impact on the business side in that the specialized wording of database design can cause considerable confusion when business rules are drafted (Hoberman, 2015). It is therefore advisable to avoid elevated language so that the same concepts are understood consistently in both business rules and database design.
References
Coronel, C., & Morris, S. (2016). Database systems: design, implementation, & management. Cengage Learning.
Hoberman, S. (2015). Data Modeling Made Simple: A Practical Guide for Business and IT Professionals. Technics Publications.
Kanade, A., Gopal, A., & Kanade, S. (2014). A study of normalization and embedding in MongoDB. In 2014 IEEE International Advance Computing Conference (IACC) (pp. 416-421). IEEE.