Database normalization is a database design technique that calls for the fields and tables of a database to be organized in a way that reduces redundancy and undesirable dependencies. This is achieved by dividing large tables into smaller ones and linking them through relationships. The purpose is to ensure that a modification needs to be made in only one table, with the change carried to the others through those relationships (Garmany et al., 2005).
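As a minimal sketch of this idea, the following example splits a hypothetical orders record into two related tables; the schema, names, and values are illustrative assumptions, not drawn from the source.

    import sqlite3

    # Hypothetical schema: instead of repeating a customer's name and
    # city on every order row, customer details live in one table and
    # orders reference them through a foreign key relationship.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (
            customer_id INTEGER PRIMARY KEY,
            name        TEXT NOT NULL,
            city        TEXT NOT NULL
        );
        CREATE TABLE orders (
            order_id    INTEGER PRIMARY KEY,
            customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
            amount      REAL NOT NULL
        );
    """)
    conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'Nairobi')")
    conn.executemany("INSERT INTO orders VALUES (?, 1, ?)",
                     [(101, 25.0), (102, 40.0)])

    # Because the city is stored exactly once, the modification is made
    # in one table only; every order reflects it through the relationship.
    conn.execute("UPDATE customers SET city = 'Mombasa' WHERE customer_id = 1")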
One key disadvantage of normalization is the design and practical difficulty associated with it. The process requires the creation of many tables that would be unnecessary if the database were denormalized. This also reduces the performance of the database, since considerable work is needed to join the tables and coordinate them. A normalized database demands substantial resources to perform effectively, including the CPU time allocated to each process, the main memory required by the system, and the I/O capacity needed to read from multiple tables. A fixed rule for when to stop normalizing may not be a good idea, but normalization should be abandoned as soon as the process becomes too demanding (Garmany et al., 2005).
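Continuing the illustrative session above, the join overhead this paragraph describes shows up in the query needed to reassemble a complete order record; in a denormalized table the same data would come from a single row.

    # Reassembling a full order record now requires a join, which the
    # engine must plan and execute on every read. A fully normalized
    # design may need many such joins in a single query.
    rows = conn.execute("""
        SELECT o.order_id, c.name, c.city, o.amount
        FROM orders AS o
        JOIN customers AS c ON c.customer_id = o.customer_id
    """).fetchall()
    print(rows)  # [(101, 'Ada', 'Mombasa', 25.0), (102, 'Ada', 'Mombasa', 40.0)]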
There are cases that call for denormalization for the sake of performance, such as cash registers and mobile technology. Denormalization is also important where responses are required quickly to support decision making, and where no RDBMS is used. It is likewise applied where business intelligence is of great interest to the users. This, however, calls for considerable caution from developers if the resulting redundancy is to be controlled and data integrity maintained (Garmany et al., 2005).
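As a sketch of deliberate denormalization for such reporting workloads, again continuing the illustrative session with a hypothetical reporting table, a wide table can repeat customer details so queries need no joins; the comments note the integrity burden this creates.

    # Denormalized reporting table: customer details are copied onto
    # every row so business-intelligence queries avoid joins entirely.
    conn.executescript("""
        CREATE TABLE order_report (
            order_id INTEGER PRIMARY KEY,
            name     TEXT NOT NULL,  -- redundant copy of customers.name
            city     TEXT NOT NULL,  -- redundant copy of customers.city
            amount   REAL NOT NULL
        );
        INSERT INTO order_report
        SELECT o.order_id, c.name, c.city, o.amount
        FROM orders AS o
        JOIN customers AS c ON c.customer_id = o.customer_id;
    """)

    # The caution the text raises: any later change to customers must be
    # propagated here as well, or the redundant copies drift out of sync.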
References
Garmany, J., Walker, J., & Clark, T. (2005). Logical database design principles. Boca Raton, Fla.: Auerbach Publications.