The software development life cycle (SDLC) is the process through which software and systems are designed, developed, and implemented (Roebuck, 2012). Proper SDLC methodologies can be used to improve the quality of datasets. This is done by selecting the best SDLC methodology, which in turn is achieved by learning the SDLC models, assessing the needs of all stakeholders, and then defining criteria with which to weigh the candidate models and establish the best one to implement (Chao, 2006). These criteria must account for the capabilities of the design taskforce, the best technology for resolving the problem at hand, the compatibility of the methodology with the problem, and the requirements stated by the stakeholders.
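To make the weighing of such criteria concrete, the following sketch scores candidate SDLC models against weighted criteria. It is only a minimal illustration: the model names, criteria, weights, and scores are assumptions for the example, not values drawn from the cited sources.

```python
# Minimal sketch of a weighted-criteria scorecard for choosing an SDLC model.
# All names, weights, and scores below are hypothetical examples.

# Weights reflect how much each criterion matters to the stakeholders (sum to 1.0).
weights = {
    "taskforce_fit": 0.25,          # capabilities of the design taskforce
    "technology_fit": 0.25,         # suitability of the technology to the problem
    "problem_compatibility": 0.25,  # fit of the methodology to the problem
    "stakeholder_requirements": 0.25,
}

# Each candidate model is scored 1-5 on every criterion.
candidates = {
    "waterfall": {"taskforce_fit": 4, "technology_fit": 3,
                  "problem_compatibility": 2, "stakeholder_requirements": 3},
    "iterative": {"taskforce_fit": 3, "technology_fit": 4,
                  "problem_compatibility": 4, "stakeholder_requirements": 4},
}

def weighted_score(scores):
    """Combine per-criterion scores into a single weighted total."""
    return sum(weights[c] * s for c, s in scores.items())

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
print(best)  # -> "iterative" under these illustrative numbers
```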
The system requirements drive the choice of software development approach (Roebuck, 2012). To improve the quality of the datasets in a database, the initial stages of the SDLC should be carried out seriously and precisely so that they yield the best database system. The first three stages of the SDLC, system investigation, analysis, and design, are the most fundamental and crucial to proper systems development (Chao, 2006), and it is at these stages that the quality of datasets can be controlled. System investigation involves addressing all requirements, both user and system requirements. Current and future requirements are stipulated at this point so that the system can adjust to future changes; planning is done and the system's feasibility is determined. Because the problem to be resolved through the database is understood here, designers know all the types of data required, and this understanding in turn enhances the quality of the datasets.
The next phase is analysis, which involves breaking the system into smaller functional modules and determining whether the system really resolves the problem. Here the project goals are drawn up, stakeholders are asked to state their requirements, and the system requirements are also stated. The system is then analyzed to establish whether it can meet these goals and requirements, and the SDLC methodology is chosen at this point on the basis of the requirements (Roebuck, 2012). Finally, in the design stage the requirements and project goals are implemented: the functional modules are designed into working subsystems and tested against the requirements. These three phases of the SDLC are crucial and would certainly help to improve the datasets in a database system.
After databases are created, they require regular maintenance to ensure they remain functional and satisfy both user and system needs (Chao, 2006). Maintenance can include rebuilding the data indexes that enable data retrieval and storage, which is done through indexing. Other maintenance practices include backing up, either periodically or as a one-time backup, which ensures the safety of the data; maintaining high levels of data integrity by checking for corrupt and duplicated datasets; and securing the database. These are the basic maintenance practices on any database system. To improve data quality, data integrity must always be maintained, and security enforced by controlling access to the database and by limiting users to the level of detail they should interact with. Backups, moreover, ensure consistency in the data, since it can be restored in case of data corruption.
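As a rough illustration of these practices, the sketch below performs routine maintenance on a SQLite database from Python. The database file, table, and column names are hypothetical, and other database engines expose equivalent commands under different names.

```python
import sqlite3

# Hypothetical database and schema, used purely for illustration.
conn = sqlite3.connect("inventory.db")
conn.execute("CREATE TABLE IF NOT EXISTS parts (part_number TEXT, description TEXT)")
conn.executemany("INSERT INTO parts VALUES (?, ?)",
                 [("P-100", "bolt"), ("P-100", "bolt"), ("P-200", "nut")])
conn.commit()

# 1. Rebuild indexes so data retrieval stays efficient.
conn.execute("REINDEX")

# 2. Integrity check: scan the database file for corruption.
status = conn.execute("PRAGMA integrity_check").fetchone()[0]
print("integrity:", status)  # "ok" when no corruption is found

# 3. Look for duplicated datasets (rows sharing the same natural key).
duplicates = conn.execute("""
    SELECT part_number, COUNT(*) AS copies
    FROM parts
    GROUP BY part_number
    HAVING COUNT(*) > 1
""").fetchall()
print("duplicate keys:", duplicates)

# 4. One-time backup to a separate file (could equally run periodically).
backup = sqlite3.connect("inventory_backup.db")
conn.backup(backup)
backup.close()
conn.close()
```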
In object-oriented software development methodologies, the most efficient methodology for planning a proactive concurrency control method and lock granularity is the multi-granularity locking model. This model increases efficiency because it provides high concurrency with low locking overhead when accessing objects. Locking is implemented on schemas and instances; the schemas are developed individually and independently but are integrated to give the functional output. The model offers well-designed lock modes, compatibility matrices, and locking protocols, which makes it the best choice for concurrency control in object-oriented databases. It can also be used to minimize database security risks, since it implements locking protocols and lock modes that cannot be overridden. It offers adequate security for datasets because multi-user access is controlled: once a user accesses a record in the database, the model locks that record against modification by other users, ensuring data consistency and security.
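A lock compatibility matrix sits at the core of such a model. The sketch below encodes the standard multi-granularity lock modes (intention shared IS, intention exclusive IX, shared S, shared with intention exclusive SIX, and exclusive X) and checks whether a requested lock can coexist with those already held. It is a minimal illustration of the compatibility check only, not the full locking protocol.

```python
# Standard multi-granularity lock modes and their compatibility matrix.
# True means the two modes can be held on the same object simultaneously.
COMPATIBLE = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

def can_grant(requested: str, held_modes: list[str]) -> bool:
    """Grant the requested mode only if it is compatible with every held lock."""
    return all(COMPATIBLE[requested][held] for held in held_modes)

# Example: readers can share an object, but a writer must wait.
print(can_grant("S", ["IS", "S"]))  # True: shared access coexists
print(can_grant("X", ["S"]))        # False: exclusive lock must wait
```

The intention modes (IS, IX) are what keep the locking overhead low: a transaction marks its intent on the coarse object (the schema) and takes its real lock only on the fine-grained object (the instance), so conflicts can be detected without locking everything.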
Record-level locking prevents simultaneous access to a record in a database, which would otherwise cause inconsistency. The verify method is used to implement it: when a user's transaction accesses a record, the record is locked against other users by verifying that it is in use. Further verification ensures that the initial user has finished with the record and saved the changes to the database before the record is opened to other users. In this way no record is accessed by more than one user; a record is accessed by exactly one user at a specific time within a transaction. Verification constraints are created to always check whether a record is in use before allowing access, making the method effective at enforcing record-level locking.
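A minimal sketch of this verify-before-access idea is shown below, using an in-memory lock registry. The class and method names are hypothetical, and a production system would add timeouts, deadlock handling, and persistence of the lock state.

```python
import threading

class RecordLocks:
    """Verify-before-access locking: a record is held by at most one user at a time."""

    def __init__(self):
        self._held = {}                 # record_id -> user currently using it
        self._mutex = threading.Lock()  # guards the registry itself

    def acquire(self, record_id, user):
        """Verify the record is free; if so, mark it as in use by this user."""
        with self._mutex:
            if record_id in self._held:
                return False            # verification failed: record is in use
            self._held[record_id] = user
            return True

    def release(self, record_id, user):
        """After the user saves changes, reopen the record to other users."""
        with self._mutex:
            if self._held.get(record_id) == user:
                del self._held[record_id]

locks = RecordLocks()
print(locks.acquire("order-17", "alice"))  # True: alice now holds the record
print(locks.acquire("order-17", "bob"))    # False: bob must wait for alice
locks.release("order-17", "alice")
print(locks.acquire("order-17", "bob"))    # True: record reopened after the save
```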
References
Chao, L. (2006). Database development and management. Boca Raton, FL: Auerbach Publications.
Roebuck, K. (2012). Software Development Life Cycle (SDLC): High-impact Strategies - What You Need to Know: Definitions, Adoptions, Impact, Benefits, Maturity, Vendors. Dayboro: Emereo Pub.
Ramsin, R., & Paige, R. F. (2008). Process-centered review of object oriented software development methodologies. ACM Computing Surveys, 40(1).