PLOS One. 2022 Apr 1;17(4):e0264420. doi: 10.1371/journal.pone.0264420

Statistical analysis of software development models by six-pointed star framework

Intakhab Alam 1, Nadeem Sarwar 1,*, Iram Noreen 1
Editor: M Usman Ashraf2
PMCID: PMC8975137  PMID: 35363771

Abstract

The Software Development Process Model (SDPM) develops software according to the needs of the client within the defined budget and time. There are many software development models, such as waterfall, iterative, Rapid Application Development (RAD), spiral, agile, Z, and AZ. Each development model follows a series of steps to develop a product, and each model has its strengths and weaknesses. In this study, we have investigated different software development process models using the six-pointed star framework. The six-pointed star is a framework of project management industry standards maintained by the Project Management Body of Knowledge (PMBOK). A survey was designed to evaluate the performance of well-known software process models in the context of the factors defined by the six-pointed star framework. The survey was conducted with experienced users in the software industry. The statistical analysis and comparison of the survey results are further used to examine the effectiveness of each model for the development of high-quality software with respect to lightweight and heavyweight methodologies for small-, medium-, and large-scale projects. After exploring the results for all factors of the six-pointed star model, we conclude that the lightweight methodology easily handles small-scale projects. The heavyweight methodology is suitable for medium- and large-scale projects, whereas the AZ model, one of the latest models, works efficiently with both small-scale and large-scale categories of projects.

Introduction

The software development life cycle (SDLC) is a process to develop software through a series of steps for designing, coding, testing, and finalizing the product [1]. There are many software development models matching user as well as developer requirements [2]. The existing software development models, e.g., the waterfall, iterative, agile, spiral, RAD, Z, and AZ models, have their strengths and weaknesses. Some models provide the best results for short-term projects while others are beneficial for long-term projects. In some models, such as agile, proper interaction between developer and client is highly emphasized, whereas in other models there is no proper developer-client interaction, or it is limited to preliminary stages at the managerial level. Client interaction matters a lot in the development life cycle for achieving the desired output [3]. The main phases of the software development life cycle are shown in Fig 1.

Fig 1. Software development life cycle.


The first phase of the software development life cycle is the planning phase, in which the senior team members interact with the client to collect the required information and wish list. Planning related to quality assurance and risk minimization is also carried out in this phase. After completing the planning phase, the next phase is to define the requirements, which is done with the help of the software requirement specification (SRS). The SRS document consists of all the product requirements necessary for the development of the product. Based on the SRS, the developer proposes more than one design, which are documented in the design document specification (DDS). All the experts review this DDS and finalize a single design feasible for the product. The next phase is the development phase, in which the developer uses a specific high-level language such as C, C++, PHP, or Java to implement the desired features of the software product. The development phase is followed by the testing phase, in which the completely developed product is tested to fix errors and bugs. The last phase is the maintenance phase, which includes customer feedback. Various software development models are adopted by software teams according to their project needs [4]. These models are categorized as lightweight and heavyweight based on multiple factors demonstrated in Table 1, which provides a comparison of both types of models [5].

Table 1. Comparison of heavyweight and lightweight methodologies [5].

Lightweight Heavyweight
Less documentation Heavy documentation (SRS)
Small team size Large team size
People-oriented Tool oriented
Nonpredictive approach Predictive approach
Not critical Extremely critical
Examples: Agile, AZ Examples: Waterfall, Iterative, V Model

In this research, both categories of models are compared by applying a PMBOK framework known as the six-pointed star model. Five models are included in this study, categorized into lightweight and heavyweight methodologies in Table 1. The literature review section discusses all these models in detail.

In the past, the overall success of a development model has been determined by three factors (time, cost, scope), but in this research, we have adopted one of the methodologies of the PMBOK known as the six-pointed star model. This methodology judges the success of development models on six factors (time, cost, scope, risk, resource, quality), which helps to improve the quality of the product.

The rest of the paper is organized as follows. The literature review section briefly discusses different software development models and recent trends in new software development models. The research methodology section briefly discusses the PMBOK six-pointed star framework. The data collection section discusses data collection and arrangement. The results section discusses the results obtained by the adopted methodology, and the last section concludes the study.

Literature review

The main purpose of software engineering is to make reliable and high-quality software for users. Software development companies develop different types of software as per user requirements for the ease of users. For the development of software, the developer needs a model, called the software development model. These models are applied according to the international standard ISO/IEC 12207 [6]. The first model of the SDLC was a fundamental part of software engineering because it provided a structure for different software development activities. Initially, the software development life cycle was assumed to involve only simple coding, but with the passage of time programming became complicated, and consequently the need arose to upgrade the structure of software development [2].

Fundamental building blocks like the structured development method are available for the software development life cycle, but the main challenge is how to improve the productivity and quality of the product. With the inclusion of supporting processes and an iterative life cycle, the productivity and quality of products improved. IBM developed two of the main models, called VIDOC and COMMAND, based on the IBM ADP model, to manage project quality and project management. After that, many models were developed by different companies [2].

Later, the importance of the time-to-market of software and of dealing with unclear and changing requirements increased greatly. In the case of any changes, the iterative model focuses mainly on the coding and testing phases. A new model, called Boehm's Spiral Model, was published, which includes requirement analysis in the iterations. After Boehm's Spiral Model, a new development model called rapid application development was published by Martin [7]. The V model, also called the verification and validation model, was introduced with a simultaneous parallel testing feature during the development phase. The major improvements in the software development life cycle are agile methodology and parallel plan-driven techniques. Nowadays, a great variety of software development models exist to develop quality and reliable products. These methodologies are categorized as lightweight and heavyweight, each having its own pros and cons. The lightweight methodology is people-oriented while the heavyweight methodology is process-oriented [5].

A new software development model called the AZ model is reported to be more efficient than all earlier models, and according to its authors, the AZ model covers the drawbacks of all earlier models [8]. This model consists of three phases: communication, development, and product release. It is an intermediate methodology that works like a heavyweight as well as a lightweight method. The related research work is summarized in Table 2.

Table 2. Literature review summary.

ID Reference Year Main Idea Limitations
1 Aslam et al. [9] 2019 Improve design patterns with the PRIC technique. The PRIC technique does not focus on quality improvement of the software.
2 Azeem Akbar et al. [10] 2018 Improve requirement change management (RCM) in GSD with the six-pointed star methodology. The barriers to the RCM process in GSD are not discussed.
3 Mohit Kumar et al. [11] 2018 Importance of the SDLC for developing high-quality software. Focuses on only two factors, cost and quality, ignoring other factors that are also necessary for project success.
4 Azeem Akbar et al. [12] 2018 Different methodologies evaluated based on the six-pointed star model. The six-pointed star model does not evaluate the development models in the context of GSD.
5 J. Yu [4] 2018 A detailed discussion of the characteristics of different software development models. Does not discuss software development models based on 4GT.
6 D. Galin [13] 2018 Software quality concepts in relation to different software development models. Focuses mainly on quality and ignores other factors, which is a major limitation.
7 Azeem Akbar et al. [8] 2017 A new technique, the AZ model, for developing high-quality software with usability engineering. A major limitation of the new AZ model is that it is a restricted form of agile, which is why agile remains preferable to AZ.
8 H. S. Modi et al. [14] 2017 Comparative analysis of three different models with three different techniques. Not all models are discussed with the different techniques.
9 P. S. Helode et al. [15] 2017 Different software development techniques with their pros and cons. No discussion of hybrid methodologies.
10 R. Kneuper [2] 2017 Sixty years of software development life cycles and the evaluation of new software development models. Sixty years of development models are explored, but not against specific factors (scope, budget, risk, resource, quality).
11 R. Arora et al. [16] 2016 Choosing the right software development model according to user needs. Not all models are discussed, as some models may require simulation with tools.
12 M. A. Rather et al. [17] 2016 Different software process models discussed, explaining when each new model came into existence. Models are explored without any specific perspective (quality, cost, etc.).
13 M. Mateen et al. [18] 2016 The AZ model addresses the drawbacks of previous models as a new methodology for developing quality software. A major limitation of the AZ model is that it is not a completely customer-friendly model.
14 I. H. Sarker et al. [19] 2015 Survey of different development process models to choose the desired model according to requirements. Does not discuss hybrid software development techniques.
15 Alshamrani et al. [20] 2015 Comparative analysis of three software development models and their pros and cons. Only three models are compared and analyzed for strengths and weaknesses.

Hence, most of the studies mentioned above focus on two or three process models irrespective of the models' category and comparison factors. Other studies are mere review papers that are not based on any survey or case study, and they do not evaluate the models using the PMBOK framework of standards. Different software development models are used according to the requirements of clients to develop high-quality software [11]. Every development model consists of a series of steps to obtain the desired output [17]. Different organizations follow different models for the development of projects, and some organizations use more than one model. The most widely used development models in the software industry are described as follows.

Waterfall model

The first sequential process model is the waterfall model, which was developed by Royce in 1970. This model belongs to the heavyweight category because proper documentation is needed. The model consists of a series of steps to develop a well-defined product [21]: communication, planning, designing, construction, and deployment. It is a very simple and easy-to-use model, but its main disadvantage is that no backtracking is possible. One phase must complete before the next phase starts, which is why this model is not preferable for long-term projects [14]. The waterfall model is suitable where requirements are clear, the technology is defined, and no confusing requirements exist. This model is easy to manage, and all its stages are clearly defined. However, it is not feasible for complex, dynamic, and high-risk projects because it does not accommodate changing requirements [10].

Iterative model

This model consists of four phases (requirement, analysis, design, coding), and the product is delivered in the form of iterations. This model also belongs to the heavyweight category because it is tool-oriented and needs proper documentation. If something is missing, it can be accommodated in the next iteration, and so on [22]. The iterative model is suitable where the requirements evolve with time, the domain is new, and the project is lengthy. In this model, the product delivery rate is faster, and prioritized requirements are developed earlier. However, the total cost of the process is high [15,23].

Agile model

In agile methodology, there is no need for detailed planning, but there is clarity about future work. The backbone of agile methodology is close customer interaction, a friendly environment between client and developer, and support for changing requirements [24]. The agile model belongs to the lightweight category. It is suitable where the least documentation is required and the client and developer share the same geographical location. In this model, the daily conversation between developer and client improves customer satisfaction and easily accommodates changes, and the product delivery rate is fast. Due to the lack of documentation [13], the project details are not very clear for future enhancement and scalability. This model is suitable for small- to medium-scale projects.

V model

This model is similar to the waterfall model and is also known as the verification and validation model. Each phase completes before the next phase starts, and parallel testing is the main feature of the V model [19]. The V model is suitable for small- and medium-scale projects and is preferred when an expert-level technical team and resources are available. In this model, parallel testing ensures that all bugs and defects are cleared at the early stages. However, this model is not very flexible toward changing requirements because it belongs to the heavyweight category. If any change is required, then the SRS document must be updated accordingly.

AZ model

The AZ model is one of the latest models and works with both the lightweight and heavyweight methods [18], though it is mostly suited to the lightweight category. This model consists of three phases (customer interaction, development, product delivery). It includes the concepts of usability testing and time-boxing, which can enhance the quality of the developed product [8]. The AZ model is suitable for small- as well as large-scale projects, and also where work in progress must be limited. This model provides high-quality software, and its main feature is usability testing. It is people- as well as process-oriented. The focus of this model is client satisfaction, but client interaction in this model is limited. Table 3 summarizes the pros and cons of the aforementioned software development models.

Table 3. Pros and cons of software development life cycle.

Models Pro Cons
Waterfall ◾ Easy to manage.
◾ Simple and easy to use.
◾ Not flexible.
◾ Not suitable for complex projects.
Iterative ◾ Results are obtained earlier.
◾ Risk is managed more easily.
◾ Good for large-scale projects.
◾ Costly model.
◾ More resources are required.
Agile ◾ Easy to manage.
◾ Flexible model.
◾ No planning is required.
◾ Not suitable for complex projects.
◾ More risk due to lack of documentation.
V Model ◾ Simple and easy to use.
◾ Good for small-scale projects.
◾ Not good for complex projects.
◾ High risk.
AZ ◾ Good for light and heavyweight methodologies.
◾ Usability testing feature improves the performance of developed software.
◾ The concept of timeboxing improves efficiency.
◾ Customer interaction is limited.

Research methodology

The Project Management Body of Knowledge (PMBOK) is a framework of standards, conventions, processes, best practices, terminologies, and guidelines that are accepted as project management industry standards. The PMBOK refers to the five process steps of project management: initiating, planning, executing, controlling, and closing [8]. One of the models of the PMBOK is the six-pointed star framework. Traditionally, software success is evaluated using three factors (time, cost, scope). However, overall success can now be measured with the help of the PMBOK model called the six-pointed star framework. This model consists of six factors (scope, budget, time, resource, risk, quality) in star formation, as shown in Fig 2 [25].

Fig 2. Six-pointed star framework of project management body of knowledge [25].


This model is divided into two triangles. The first triangle consists of (scope, schedule, budget), which are used as input/output factors, and the second triangle consists of (risk, resource, quality), known as process factors [26], shown in Figs 3 and 4. We applied this method to different software development models and performed a comparative statistical analysis to investigate which model is more efficient [22].
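The two-triangle grouping can be sketched as a small data structure; this is an illustrative sketch only, with the dictionary and function names chosen by us rather than taken from the framework:

```python
# Illustrative sketch (names are ours): the six-pointed star factors
# grouped into the two triangles described in the text.
SIX_POINTED_STAR = {
    "input_output": ("scope", "schedule", "budget"),  # first triangle
    "process": ("risk", "resource", "quality"),       # second triangle
}

def triangle_of(factor: str) -> str:
    """Return which triangle a given factor belongs to."""
    for triangle, factors in SIX_POINTED_STAR.items():
        if factor in factors:
            return triangle
    raise ValueError(f"unknown factor: {factor}")
```

For example, `triangle_of("budget")` identifies budget as an input/output factor, while `triangle_of("quality")` identifies quality as a process factor.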

Fig 3. First triangle of PMBOK six-pointed star framework [25].


Fig 4. Second triangle of PMBOK six-pointed star framework [25].


Data collection

Data was collected with the help of a survey form consisting of three phases. The first phase collects the personal information of the respondent; the second phase covers general information about the organization; and the last phase contains a questionnaire designed based on the factors of the six-pointed star method. Moreover, an ethics statement clause about informed consent is included to assure the respondents that their information will be kept confidential and will be used only for research purposes. The survey ran from July 2020 to November 2020, and twenty-six organizations participated, resulting in a collection of 31 responses; some organizations filled in more than one survey form. Responses were collected on a Likert scale (strongly agree, agree, neutral, disagree, strongly disagree) [25]. If respondents follow more than one model, then they submit one survey form for each model. The results were obtained in numeric format. We summarized these results and applied statistical graphical techniques using well-known statistical tools on the collected data. The data relating to the survey (questionnaire forms, respondent response forms) is available at the Google Drive link [27]. According to the general information, our respondents belong to different software organizations, have 3–5 years of experience with both lightweight and heavyweight types of models, and work in small and/or large organizations. Most of the respondents informed us that two factors (budget and quality) are important when they adopt a methodology for developing software. The respondents gave their opinions (on the Likert scale) about the selected models in both the lightweight and the heavyweight methodology. The distribution of software development models is shown in Table 4.
Table 4 shows the distribution of the software development models selected by developers of different software houses. When respondents filled in the survey form, they selected the specific development model applicable in their organization. According to the survey, most respondents prefer the waterfall model, followed by the agile and iterative models, with the V and AZ models selected least often.

Table 4. Distribution of software development models.

Model Selected
Waterfall 11
Iterative 10
V 7
Agile 10
AZ 7

Ethical concerns

To prevent the association of any ethical concerns with this research project, certain measures were adopted, i.e., the research participants were asked for their consent before participating in this research study. Moreover, no personal details of the research participants were collected other than their email IDs, and the research participants were informed about this. All in all, the researchers practiced a high level of morality and ethics to meet and support the confidence of the research participants. Moreover, a post-graduate research project evaluation and ethics committee consisting of three senior Ph.D. members has also approved the ethical review form of the study.

During the data collection, respondents shared their opinions about different software development methods [10]. Most respondents suggested that a lightweight method is best for small- and medium-scale projects and a heavyweight method is best for large-scale projects. Small-, medium-, and large-scale projects are categorized depending on some important factors, as shown in Table 5.

Table 5. Categorization of small-, medium-, and large-scale projects.

Factors Small Scale Projects Medium Scale Projects Large Scale Projects
Duration Less than six months Six to twelve months More than twelve months
Budget Less than $100,000 $100,000–$500,000 Greater than $500,000
Team Members Fewer than 5 people 5–20 people Greater than 20 people
Integration Minimal with other business units Moderate with other business units Significant with other business units
Impact Fewer than 25 end-users 25–250 end-users More than 250 end-users
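The numeric thresholds of Table 5 can be turned into a simple classifier. The sketch below is our own illustration: it reads the budget figures as $100,000 and $500,000, and the majority-vote rule for combining the four factors is our assumption, not part of the paper.

```python
def project_scale(duration_months: float, budget_usd: float,
                  team_size: int, end_users: int) -> str:
    """Classify a project as small, medium, or large using the Table 5
    thresholds. Combining the four factors by majority vote is our own
    assumption for illustration."""
    votes = [
        ("small" if duration_months < 6 else
         "medium" if duration_months <= 12 else "large"),
        ("small" if budget_usd < 100_000 else
         "medium" if budget_usd <= 500_000 else "large"),
        ("small" if team_size < 5 else
         "medium" if team_size <= 20 else "large"),
        ("small" if end_users < 25 else
         "medium" if end_users <= 250 else "large"),
    ]
    # Most frequent label wins (ties are not resolved deterministically).
    return max(set(votes), key=votes.count)
```

For instance, a 3-month project with a $50,000 budget, 4 team members, and 10 end-users falls in the small-scale category on every factor.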

The questionnaire form (applied to both methodologies), based on the six factors of the Project Management Body of Knowledge, is shown in Table 6. Table 7 shows the numeric Likert scale scores used for the collection of respondents' responses.

Table 6. Questionnaire form based on the six-pointed star methodology.

Factors Questions
Schedule Questions related to schedule:
• Gratifying project requirements.
• Tasks managed according to schedule.
• Awareness of project status by the project team.
Scope Questions related to scope:
• Overall decisive scope of the project.
• Team members have clarity of scope.
Budget Questions related to budget:
• Project accomplished within the decided budget.
• Return good or not.
Risk Questions related to risk:
• Management of risk.
• Meeting business aspirations.
Resource Questions related to resource:
• Availability of resources.
• Utilization of resources.
Quality Questions related to quality:
• Satisfaction of client.
• Successfully accomplished.

Table 7. Five-point Likert scale for the collection of respondent responses.

Response Score
Strongly Disagree 0
Disagree 1
Neutral 2
Agree 3
Strongly Agree 4

Results

The literature reveals that the six-pointed star methodology has only been applied to the AZ model, enabling the enhancement of software quality [9]. In this paper, we apply the six-pointed star methodology to lightweight and heavyweight software development models, perform a statistical analysis, and compare the results of the different models to check the efficiency of the development process. The results are based on the factors of the selected methodology and are calculated with the help of Eq 1.

Score = (s_1 + s_2 + … + s_n) / n (1)

where s_i is the Likert score of the i-th response and n is the number of responses.
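Reading Eq 1 as the mean Likert score over the n responses for a factor (our interpretation of the scoring, using the Table 7 mapping), a minimal sketch:

```python
# Map Likert responses to the numeric scores of Table 7.
LIKERT = {"strongly disagree": 0, "disagree": 1, "neutral": 2,
          "agree": 3, "strongly agree": 4}

def factor_score(responses):
    """Mean Likert score over n responses for one factor
    (our reading of Eq 1)."""
    scores = [LIKERT[r.lower()] for r in responses]
    return sum(scores) / len(scores)
```

For example, the responses "Agree", "Strongly Agree", and "Neutral" map to the scores 3, 4, and 2, giving a factor score of 3.0.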

The next section provides a comparative graphical representation of both methodologies based on the six-pointed star model. Result comparisons of the methodologies are discussed in the following subsections.

Lightweight vs lightweight

Agile vs AZ

Agile is one of the best processes, using an adaptive approach in which there is no need for a detailed documentation process. Customer interaction is one of the best features of the agile model. In the AZ model, customer interaction is limited to the first phase only, which is its weak point compared to the agile model. Comparison results of the agile model with the AZ model using the first triangle of the PMBOK methodology factors (scope, budget, schedule) are shown in Fig 5. The agile model is preferable in the scope, budget, and schedule factors for small-, medium-, and large-scale projects.

Fig 5. Graphical representation of agile vs AZ (first triangle) results.


Fig 6 shows the results for the second triangle of factors (risk, resource, quality). Risk is handled better in the AZ model than in the agile model in all sizes of projects. The agile model uses the most resources in small-scale projects; in medium-scale projects both models use equal resources, but in large-scale projects the AZ model is preferable for achieving the desired product. Both models deliver good-quality products, but according to the results, the AZ model is preferable to the agile model from a quality perspective because the main focus of the AZ model is to improve the quality of the product.

Fig 6. Graphical representation of agile vs AZ (second triangle) results.


Heavyweight vs heavyweight

Waterfall vs iterative

The first process model is the waterfall model, a very simple model with some drawbacks. The PMBOK methodology is applied to these two models. Comparison results of the waterfall model with the iterative model using the first triangle of the PMBOK methodology factors (scope, budget, schedule) are shown in Fig 7. In the scope factor, the iterative model is more reliable than the waterfall model in small- and large-scale projects; for medium-scale projects, both models have the same results. According to the results for the budget factor, projects are accomplished within the decided budget more often in the iterative model than in the waterfall model for small- and medium-scale projects, but in large-scale projects the waterfall model is more reliable than the iterative model. In the schedule factor, the waterfall model is more suitable for small- and medium-scale projects, but for large-scale projects the iterative model is best.

Fig 7. Graphical representation of waterfall vs iterative (first triangle) results.


Fig 8 shows the results for the second triangle of the PMBOK methodology factors (risk, resource, quality). Risk is well managed in the iterative model in small-, medium-, and large-scale projects. The iterative model uses the most resources in all types of projects. The most important factor is quality: the iterative model provides a better-quality product than the waterfall model in medium- and large-scale projects.

Fig 8. Graphical representation of waterfall vs iterative (second triangle) results.


Waterfall vs V

Comparison results of the waterfall model with the V model using the first triangle of the PMBOK methodology factors (scope, budget, schedule) are shown in Fig 9. In the scope factor, the V model is more reliable than the waterfall model in small-scale projects, but in medium- and large-scale projects the waterfall model is preferable to the V model. The ratio of projects accomplished within the decided budget is higher in the waterfall model than in the V model for all types of projects. The scheduling of the V model in small-scale projects is better than that of the waterfall model, but in medium- and large-scale projects the waterfall model is preferable to the V model.

Fig 9. Graphical representation of waterfall vs V (first triangle) results.


Fig 10 shows the results for the second triangle of factors (risk, resource, quality). In the V model, risk can be minimized easily due to the testing of every module in all sizes of projects. The V model uses more resources than the waterfall model in all sizes of projects. The product developed with the V model is more efficient than that of the waterfall model in small-, medium-, and large-scale projects.

Fig 10. Graphical representation of waterfall vs V (second triangle) results.


Iterative vs V

Comparison results of the iterative model with the V model using the first triangle of the PMBOK methodology factors (scope, budget, schedule) are shown in Fig 11. The scope factor of the iterative model is clearer than that of the V model in small- and medium-scale projects, but in large-scale projects the V model is more suitable than the iterative model. The iterative model performs all tasks within the decided budget better than the V model in all sizes of projects. The scheduling factor gives the best results in the V model compared to the iterative model.

Fig 11. Graphical representation of iterative vs V (first triangle) results.


Fig 12 shows the results for the second triangle of factors (risk, resource, quality). The V model handles risk in a well-managed way in all sizes of projects and uses the most resources to accomplish the tasks. When we compare the quality factor of both models, the V model is more efficient than the iterative model for all sizes of projects.

Fig 12. Graphical representation of iterative vs V (second triangle) results.


The complete results summary based on the survey is shown in Fig 13. This plot presents a comparative analysis of all the development models based on the PMBOK methodology, showing how all the factors of the PMBOK model work in the lightweight as well as the heavyweight methodology. According to the results, the agile model is the most preferable across both methodologies. After the agile model, respondents favor the AZ model, which is also a modern technique for developing high-quality software. The V model is the third priority of respondents, followed by the iterative and waterfall models.

Fig 13. Result summary of different development models using PMBOK methodology.


Conclusion

The analyses were conducted to determine the best methodology according to the project size and the requirements of an organization. Software quality mainly depends on the selected software development model. In this research, different software development models are compared based on the PMBOK model to ensure the quality of projects. Based on the factors of the six-pointed star model, the summarized results confirm that almost all the factors of the methodology favor lightweight methodologies for small-scale projects. For medium-scale projects, both methodologies perform almost equally. The results of both methodologies for large-scale projects show that heavyweight methodologies are much more satisfactory across all factors of the six-pointed star model. The respondents prefer the agile model because agile is a customer-friendly model. The latest model, known as the AZ model, is suitable for small-scale and large-scale projects; its main disadvantage is limited customer interaction, but because it covers the drawbacks of all earlier models, it is more reliable. There is a need to develop a new software model that is client-friendly and easy to use for developers while minimizing risk within the decided budget and quality. In the future, we will explore intelligent software development models based on a data-driven approach.

Supporting information

S1 Dataset

(XLSX)

Data Availability

All relevant data are within the paper and its Supporting Information files.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1. Booch G., "The History of Software Engineering," IEEE Softw., vol. 35, no. 5, pp. 108–114, 2018, doi: 10.1109/MS.2018.3571234
  • 2. Kneuper R., "Sixty years of software development life cycle models," IEEE Ann. Hist. Comput., vol. 39, no. 3, pp. 41–54, 2017, doi: 10.1109/MAHC.2017.3481346
  • 3. Wulandari T. N., Kusumaningtyas L. E., and Irmade O., "A study of SDLC to develop well engineered software," J. Audi, vol. 3, no. 1, pp. 63–72, 2018.
  • 4. Yu J., "Research Process on Software Development Model," IOP Conf. Ser. Mater. Sci. Eng., vol. 394, no. 3, 2018, doi: 10.1088/1757-899X/394/3/032045
  • 5. Charvat J., "Heavyweight vs. lightweight methodologies: Key strategies for development," pp. 383–388, 2002. Available: http://www.techrepublic.com/article/heavyweight-vs-lightweight-methodologies-key-strategies-for-development/
  • 6. Al-Qutaish R. E., Al-Sarayreh K. T., and Al-Sarayreh K., "Software Process and Product ISO Standards: A Comprehensive Survey," European Journal of Scientific Research, vol. 19, no. 2, pp. 289–303, 2008.
  • 7. Boehm B. W., "A Spiral Model of Software Development and Enhancement," Computer, vol. 21, no. 5, pp. 61–72, 1988.
  • 8. Wideman R. M., "Comparing PRINCE2® with PMBoK®," Pm4Succes, pp. 1–9, 2002.
  • 9. Aslam T., Rana T., Batool M., Naheed A., and Andaleeb A., "Quality-based software architectural decision making," 2019 Int. Conf. Commun. Technol. (ComTech), pp. 114–119, 2019, doi: 10.1109/COMTECH.2019.8737836
  • 10. Akbar M. A., Nasrullah, Shafiq M., Ahmad J., Mateen M., and Riaz M. T., "AZ-Model of software requirements change management in global software development," 2018 Int. Conf. Comput. Electron. Electr. Eng. (ICE Cube), pp. 1–6, 2019, doi: 10.1109/ICECUBE.2018.8610964
  • 11. Kumar M., "A Comparative Study of Universally Accepted SDLC Models for Software Development," Int. J. Sci. Res. Sci. Technol., vol. 4, no. 5, p. 31, 2018. Available: www.ijsrst.com
  • 12. Akbar M. A. et al., "Statistical Analysis of the Effects of Heavyweight and Lightweight Methodologies on the Six-Pointed Star Model," IEEE Access, vol. 6, pp. 8066–8079, 2018, doi: 10.1109/ACCESS.2018.2805702
  • 13. Galin D., "From SDLC to Agile—Processes and Quality Assurance Activities," Softw. Qual. Concepts Pract., pp. 635–666, 2018, doi: 10.1002/9781119134527.app4
  • 14. Modi H. S., Singh N. K., and Chauhan H. P., "Comprehensive Analysis of Software Development Life Cycle Models," Int. Res. J. Eng. Technol., vol. 4, no. 6, pp. 117–122, 2017. Available: https://irjet.net/archives/V4/i6/IRJET-V4I618.pdf
  • 15. Helode P. S., Walse K. H., and Karande M. U., "An Online Secure Social Networking with Friend Discovery System," Int. J. Innov. Res. Comput. Commun. Eng., vol. 5, no. 4, pp. 8198–8205, 2017, doi: 10.15680/IJIRCCE.2017
  • 16. Arora R. and Arora N., "Analysis of SDLC Models," Int. J. Curr. Eng. Technol., vol. 6, no. 1, pp. 2277–4106, 2016. Available: http://inpressco.com/category/ijcet
  • 17. Rather M. A. and Bhatnagar V., "A comparative study of sdlc model," Aug. 2016.
  • 18. Mateen A., Azeem M., and Shafiq M., "AZ Model for Software Development," Int. J. Comput. Appl., vol. 151, no. 6, pp. 33–36, 2016, doi: 10.5120/ijca2016911701
  • 19. Sarker I. H., Faruque F., Hossen U., and Rahman A., "A survey of software development process models in software engineering," Int. J. Softw. Eng. its Appl., vol. 9, no. 11, pp. 55–70, 2015, doi: 10.14257/ijseia.2015.9.11.05
  • 20. Alshamrani A. and Bahattab A., "A Comparison Between Three SDLC Models: Waterfall Model, Spiral Model, and Incremental/Iterative Model," IJCSI Int. J. Comput. Sci. Issues, vol. 12, no. 1, pp. 106–111, 2015. Available: https://www.academia.edu/10793943/A_Comparison_Between_Three_SDLC_Models_Waterfall_Model_Spiral_Model_and_Incremental_Iterative_Model
  • 21. Mujumdar A., Masiwal G., and Chawan P. M., "Analysis of various Software Process Models," vol. 2, no. 3, pp. 2015–2021, 2021. Available: http://www.researchgate.net/profile/Pramila_Chawan/publication/267427007_Analysis_of_various_Software_Process_Models/links/54f0aa150cf2f9e34efd0776.pdf
  • 22. Munassar N. M. A. and Govardhan A., "A Comparison Between Five Models Of Software Engineering," Int. J. Comput. Sci. Issues, vol. 7, no. 5, pp. 94–101, 2010.
  • 23. Gowtham V. G., Manoj Y., Pooventhiran G., Praveen A., Shivaram R., and Kathiresan A., "Evolutionary Models in Software Engineering," Int. J. New Technol. Res., vol. 3, no. 5, p. 263294, 2017.
  • 24. Meso P. and Jain R., "Agile software development: Adaptive systems principles and best practices," Inf. Syst. Manag., vol. 23, no. 3, pp. 19–30, 2006, doi: 10.1201/1078.10580530/46108.23.3.20060601/93704.3
  • 25. Akbar M. A. et al., "Improving the quality of software development process by introducing a new methodology-Az-model," IEEE Access, vol. 6, pp. 4811–4823, 2017, doi: 10.1109/ACCESS.2017.2787981
  • 26. Jamali G. and Oveisi M., "A Study on Project Management Based on PMBOK and PRINCE2," Mod. Appl. Sci., vol. 10, no. 6, p. 142, 2016, doi: 10.5539/mas.v10n6p142
  • 27. Survey data available at: https://drive.google.com/drive/folders/1xcVD3xVYm0eZMDlipQ2E8HMGHgkUGs23?usp=sharing

Decision Letter 0

M Usman Ashraf

17 May 2021

PONE-D-21-05435

Statistical Analysis of Software Development Models and Six-Pointed Star Methodology

PLOS ONE

Dear Dr. SARWAR,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 01 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

M. Usman Ashraf, Ph.D

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We suggest you thoroughly copyedit your manuscript for language usage, spelling, and grammar. If you do not know anyone who can help you do this, you may wish to consider employing a professional scientific editing service.  

Whilst you may use any professional scientific editing service of your choice, PLOS has partnered with both American Journal Experts (AJE) and Editage to provide discounted services to PLOS authors. Both organizations have experience helping authors meet PLOS guidelines and can provide language editing, translation, manuscript formatting, and figure formatting to ensure your manuscript meets our submission guidelines. To take advantage of our partnership with AJE, visit the AJE website (http://learn.aje.com/plos/) for a 15% discount off AJE services. To take advantage of our partnership with Editage, visit the Editage website (www.editage.com) and enter referral code PLOSEDIT for a 15% discount off Editage services.  If the PLOS editorial team finds any language issues in text that either AJE or Editage has edited, the service provider will re-edit the text for free.

Upon resubmission, please provide the following:

  • The name of the colleague or the details of the professional service that edited your manuscript

  • A copy of your manuscript showing your changes by either highlighting them or using track changes (uploaded as a *supporting information* file)

  • A clean copy of the edited manuscript (uploaded as the new *manuscript* file)

3. Thank you for including your ethics statement:  "Ethical concerns mentioned in the article.

"To prevent the association of any ethical concerns to this particular research project,

certain measures were taken i.e. the research participants were asked about their

consent before participating in this research study. Also, no personal details of the

research participants were collected other than their email IDs, and the research

participants were informed about this act. All in all, the researcher practiced a high

level of morality and ethicality to attain and maintain the confidence of the research

participants."".   

Please amend your current ethics statement to include the full name of the ethics committee/institutional review board(s) that approved your specific study.

Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research.

4. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

5. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

6. Please include your tables as part of your main manuscript and remove the individual files. Please note that supplementary tables should be uploaded as separate "supporting information" files.

7. Please include captions for *all* your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors compare several software development models by administering a survey to several software development companies. The survey asks respondents to indicate their preferred model and then proceeds to ask them several questions about their experiences in small, medium, and large software development projects in both lightweight and heavy methodologies. Based on this the authors analyze the pros/cons of the reported software models.

While the objective of comparing several software models is a good one, the manuscript is not. It is written like a presentation that has been converted into a word document. Most of it consists almost entirely of bulleted lists with very little details and references to supporting evidence. There is a related works table but it is presented as is, without any elaboration. Figures have extremely short captions without descriptions of their visual elements. Proper statistical analyses are non existent and the language is poor.

Regarding the survey, it asks a participant their opinion on two methodologies: lightweight and heavy. What if the participant is only experienced in one of these methodologies? Also, what if the participant indicates that they use more than one model in their organization? these situations are not explained in the text.

As it is, this work is a definite rejection. To improve the manuscript, the authors should consider:

- Fixing spelling and grammar mistakes.

- Unification of the pros and cons of various methodologies, as reported by literature, so that they could be summarized into a nice table and discussed afterwords.

- Adding detailed discussion of related works.

- Replacing bulleted lists with text, including extra details on the points mentioned in those lists (along with proper citations).

- Describing the visual elements of the figures properly.

- Performing and reporting proper statistical analysis (e.g., hypothesis testing, Bayesian approaches, etc).

- Reporting the results about the background of survey respondents (section A of the survey).

- Adding error bars to the figures.

- Ditching 3D visualizations (Fig 9) and using simpler more intuitive 2D histograms.

Also, the authors should consider objective measures such as the organization's ability to deliver projects on time, budget, and scope. Objective measures are nice because they are not affected by subjective biases inherent in survey-based methods. Also, it might be useful to understand what kind of products the companies develop, as different methodologies might work best for different product types, not just (large, small, medium scale) products.

Reviewer #2: The submitted manuscript deals with the problem of different software development process models comparison to the six-pointed star methodology using a statistical analysis. Initially, the authors have considered six different models with arguments of their advantages and disadvantages. Then, the data of the survey with 31 responses is used for statistical analysis. Indeed, the main subject of the manuscript seems to be interesting. However, several failings must be indicated.

1. The manuscript gives an impression of a short presentation with the separate slides. The description is weak and declarative. All text of the model comparison can be summarized in a one table but then there is no text of the manuscript at all. Just an indication is “see the table”.

2. The illustrations in Figure 2,3,4 are too obvious. Moreover, they are exactly the same like in the paper by Akbar et. al (Akbar, M. A., Sang, J., Khan, A. A., Fazal, E. A., Nasrullah, Shafiq, M., Hussain, S., Hu, H., Elahi, M., and Xiang, H. (2017) Improving the quality of software development process by introducing a new methodology-Az-model. IEEE Access 6, 4811-4823.) The authors make a reference number 19 to this paper but there is no indication on it in the Figure legend. Thus, the question of originality of the drawings is immediately arisen. Additionally, even in an original publication by Akbar et. al a Six-pointed star model (Fig.5) is taken from free sources: wikipedia.org/wiki/File: TripleConstraint.jpg.

3. There is no description of the statistical analysis made at all. The authors just represent several very unclear diagrams where the results have been plotted tightly with nearly invisible box-whisker output. The figure legends are very short with no descriptions.

4. The conclusions are unconvinced. The main reason for this observation is a poor depiction of method and results. The postulate (page 19):

“Some models suitable for small and some suitable for large-scale projects. Some models have poor and some have good client interaction.”

is common and uninformative.

5. A reader can make an opinion about the manuscript like a presentation of business result but not an analytic material. To improve this impression the authors have to re-write methods and results and represent the outcomes clearly.

6. The form of the manuscript edition is poor as well. For example, the reference 13 has no appropriate authors’ list. Additionally, the text must be edited by English native speakers.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.


Author response to Decision Letter 0


1 Jul 2021

Reviewer 1 concerns:

Concern 1

Reviewer #1: The authors compare several software development models by administering a survey to several software development companies. The survey asks respondents to indicate their preferred model and then proceeds to ask them several questions about their experiences in small, medium, and large software development projects in both lightweight and heavy methodologies. Based on this the authors analyze the pros/cons of the reported software models.

Author response: The authors examined different software development models and designed a questionnaire based on the PMBOK methodology, known as the six-pointed star model. We then obtained the opinions of experts in different methodologies on small, medium, and large-scale projects and finalized the results based on those expert opinions. Pros and cons are also discussed in a table, as addressed in our response to concern No. 5.

Author action: No updates required in the manuscript.

Concern 2

While the objective of comparing several software models is a good one, the manuscript is not. It is written like a presentation that has been converted into a word document. Most of it consists almost entirely of bulleted lists with very few details and references to supporting evidence. There is a related works table but it is presented as is, without any elaboration. Figures have extremely short captions without descriptions of their visual elements. Proper statistical analyses are nonexistent and the language is poor.

Author response: The manuscript is updated by removing the bullet list, tables and figures are properly captioned.

Author action: We have updated the manuscript by explaining the concept on page 4 as required by the reviewer.

Concern 3

Regarding the survey, it asks a participant their opinion on two methodologies: lightweight and heavy. What if the participant is only experienced in one of these methodologies? Also, what if the participant indicates that they use more than one model in their organization? these situations are not explained in the text.

Author response: Responses were collected from experienced members of different organizations who are well acquainted with the software methodologies. If an organization uses more than one model, respondents filled in two forms.

Author action: We have updated the manuscript by explaining the concept on pages 4 and 8 respectively as required by the reviewer.

Concern 4

- Fixing spelling and grammar mistakes.

Author response: All the grammar and spelling mistakes were fixed using Grammarly. The manuscript was also thoroughly proofread by an English language expert.

Author action: We have updated the manuscript as required.

Concern 5

- Unification of the pros and cons of various methodologies, as reported by literature, so that they could be summarized into a nice table and discussed afterward.

- Adding detailed discussion of related works.

- Replacing bulleted lists with text, including extra details on the points mentioned in those lists (along with proper citations).

Author response: The pros and cons table is included in the manuscript and bullets are replaced with the text form.

Author action: We have updated the manuscript section software development models on pages 4 to 6 with desired modifications suggested by honorable reviewer.

Concern 6

- Describing the visual elements of the figures properly.

- Performing and reporting proper statistical analysis (e.g., hypothesis testing, Bayesian approaches, etc).

- Reporting the results about the background of survey respondents (section A of the survey).

- Adding error bars to the figures.

- Ditching 3D visualizations (Fig 9) and using simpler more intuitive 2D histograms.

Author response: The visual elements of the figures are properly described in the revised manuscript, and the background of the respondents is also mentioned in the data collection section. Error bars are provided in the figures. The figures are updated using simpler, more intuitive 2D histograms.

Author action: We have updated the manuscript data collection section, result section, by explaining the concept on page 9 as suggested by reviewer.

Concern 7

Also, the authors should consider objective measures such as the organization's ability to deliver projects on time, budget, and scope. Objective measures are nice because they are not affected by subjective biases inherent in survey-based methods. Also, it might be useful to understand what kind of products the companies develop, as different methodologies might work best for different product types, not just (large, small, medium scale) products.

Author response: Different methodologies are used for different categories of products, such as small, medium, and large-scale projects. The characteristics of the different methodologies show which methodology suits small, medium, or large-scale projects (page 5).

Author action: The characteristics of all methodologies are already discussed in the manuscript; therefore, this concern requires no update.

Reviewer 2 concerns:

Concern 1

1. The manuscript gives an impression of a short presentation with separate slides. The description is weak and declarative. All text of the model comparison can be summarized in one table but then there is no text of the manuscript at all. Just an indication is “see the table”.

Author response: In the manuscript, the table presents the summarized form of the research work. However, we have added descriptive text for the table to improve the manuscript as suggested by the reviewer.

Author action: We have updated the manuscript methodology section by explaining the concepts on page 6.

Concern 2

2. The illustrations in Figures 2,3,4 are too obvious. Moreover, they are the same as in the paper by Akbar et. al (Akbar, M. A., Sang, J., Khan, A. A., Fazal, E. A., Nasrullah, Shafiq, M., Hussain, S., Hu, H., Elahi, M., and Xiang, H. (2017) Improving the quality of software development process by introducing a new methodology-Az-model. IEEE Access 6, 4811-4823.) The authors make reference number 19 to this paper but there is no indication of it in the Figure legend. Thus, the question of the originality of the drawings immediately arises. Additionally, even in an original publication by Akbar et. al a Six-pointed star model (Fig.5) is taken from free sources: wikipedia.org/wiki/File: TripleConstraint.jpg.

Author response: Figures 2, 3, and 4 are the methodology figures of the six-pointed star methodology. These figures are adapted from Akbar et al. (Akbar, M. A., Sang, J., Khan, A. A., Fazal, E. A., Nasrullah, Shafiq, M., Hussain, S., Hu, H., Elahi, M., and Xiang, H. (2017) Improving the quality of software development process by introducing a new methodology-Az-model. IEEE Access 6, 4811–4823). We have properly cited these figures in the manuscript.

Author action: We have updated the manuscript methodology section by explaining the concept on page 8 and figures are also cited.

Concern 3

There is no description of the statistical analysis made at all. The authors just represent several very unclear diagrams where the results have been plotted tightly with nearly invisible box-whisker output. The figure legends are very short with no descriptions.

Author response: We have clearly described all diagrams in the results section. The figure legends are visible and described clearly.

Author action: We have updated the manuscript results section by explaining the concept on page 10.

Concern 4

4. The conclusions are unconvinced. The main reason for this observation is a poor depiction of the method and results. The postulate (page 19):

“Some models suitable for small and some suitable for large-scale projects. Some models have poor and some have good client interaction.” is common and uninformative.

Author response: We have updated the results and conclusion section accordingly.

Author action: We have updated the manuscript conclusion section by explaining the concept on page 21.

Concern 5

5. A reader can make an opinion about the manuscript like a presentation of the business result but not analytic material. To improve this impression the authors have to re-write methods and results and represent the outcomes.

Author response: We have revised the results section, the diagrams, and the outcomes, and improved the discussion.

Author action: We have updated the manuscript result section as well as the conclusion on pages 10 and 21.

Concern 6

The form of the manuscript edition is poor as well. For example, reference 13 has no appropriate authors’ list. Additionally, the text must be edited by English native speakers.

Author response: We have added references with the help of the Mendeley tool.

Author action: We updated reference 13 on page 22, and similarly all references were checked for missing information using MS Word's insert-citation feature.

Once again, the authors are grateful to all reviewers for their valuable time and suggestions to improve the manuscript.

Attachment

Submitted filename: Responses to reviewers.docx

Decision Letter 1

M Usman Ashraf

26 Jul 2021

PONE-D-21-05435R1

Statistical Analysis of Software Development Models and Six-Pointed Star Methodology

PLOS ONE

Dear Dr. SARWAR,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 09 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

M. Usman Ashraf, Ph.D

Academic Editor

PLOS ONE

Journal Requirements:

Additional Editor Comments (if provided):


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: 1. The background section is still poor. First, it starts at too high a level by trying to convince us why computers are important. Second, papers are discussed briefly and simply placed in a table format, one sentence each, which is not enough to understand how this work is different from other papers which have attempted to compare software models and methodologies.

2. There is almost zero discussion of each results Figure. The manuscript does not tell the readers what the results imply in each plot. Because of this and the lack of y-axis labels in the plots, it is unclear how the authors reached their conclusions regarding the efficacy of different models.

3. Does the distinction between "methodology" and "model" imply that the two are orthogonal? For example, can you have a heavyweight agile combination? or a lightweight waterfall combination? it certainly appears like the two are NOT orthogonal. If that's the case, what's the point of this distinction?

4. It seems unreasonable to me to expect that participants in small organizations can have strong experience with large-scale projects, and vice-versa. How do the authors control for this in their statistical analysis? and how do they control for the respondent's programming experience?

5. Results:

- What is the distribution of software models that the respondents have selected (i.e., how many selected waterfall, AZ, agile, etc)?

- Plots are still missing y-axis labels.

- Error bars appear to be identical in length for same colored bars in each plot. Why is that?

6. To summarize their results, I suggest using a star diagram similar to Figure 2, to show where each model's strengths lie.

7. Language:

The authors should re-examine their manuscript's language. There are clear typos, grammatical errors, and weird phraseology everywhere. A few examples:

- "After completing the paling phase ..." (page 3) -> After completing the planning phase ...

- " the iterative model adopts which is mainly focus on the coding and testing phase" (page 5) -> the iterative model focuses mainly on coding and testing phases

- "Now in the modern year," (page 8)

- "Questioner related to schedule" (Table 4) -> "Questions related to schedule"

8. Software development models are still unnecessarily listed in a bulleted list in page 6.

9. Data availability: the authors include summary data (Table 6) in the supplementary file but it is unclear what the numbers in the table mean. Proper data availability for this manuscript means that the authors must include all survey results (taking care to anonymize the authors and organizations of course).

While the manuscript has improved in several aspects, it still has not addressed the poor writing and fundamental lack of (i) a proper analysis of related literature, (ii) rigorous statistical analysis, and (iii) convincing discussion and conclusions.

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Apr 1;17(4):e0264420. doi: 10.1371/journal.pone.0264420.r004

Author response to Decision Letter 1


24 Aug 2021

Concern 1:: The background section is still poor. First, it starts at too high a level by trying to convince us why computers are important. Second, papers are discussed briefly and simply placed in a table format, one sentence each, which is not enough to understand how this work is different from other papers which have attempted to compare software models and methodologies.

Author response: The background description in the literature review section has been revised according to the reviewer's suggestion.

Author actions: The background description in the literature review section has been revised according to the reviewer's suggestion. Bulleted lists and irrelevant headings have been removed, as has the opening paragraph introducing computers. The remaining paragraphs have been revised, rephrased, and proofread. The difference between previous studies and this study is explained on page 6, after Table 2.

Concern 2: There is almost zero discussion of each results Figure. The manuscript does not tell the readers what the results imply in each plot. Because of this and the lack of y-axis labels in the plots, it is unclear how the authors reached their conclusions regarding the efficacy of different models.

Author response: We have updated the discussion of the results figures and properly labeled the y-axes of the plots. The resulting figures clearly show the comparison of small-, medium-, and large-scale projects with respect to the adopted methodology.

Author actions: We have updated the results figures with a detailed discussion and added y-axis labels.

Concern 3: Does the distinction between "methodology" and "model" imply that the two are orthogonal? For example, can you have a heavyweight agile combination? or a lightweight waterfall combination? it certainly appears like the two are NOT orthogonal. If that's the case, what's the point of this distinction?

Author response: A model provides an environment in which a framework is implemented, whereas a methodology is a set of methods that helps to perform an activity. In our research, the adopted framework's factors are used to judge the different software development models, heavyweight as well as lightweight, factor by factor; for example, for the budget factor (e.g., agile vs. waterfall), the budget of both models is analyzed under the lightweight as well as the heavyweight methodology in a sequential manner.

Author actions: We compare the models with respect to small-, medium-, and large-scale projects and also evaluate these models under lightweight as well as heavyweight methodologies.

Concern 4: It seems unreasonable to me to expect that participants in small organizations can have strong experience with large-scale projects, and vice-versa. How do the authors control for this in their statistical analysis? and how do they control for the respondent's programming experience?

Author response: In the data collection section, we explain that our respondents are experienced practitioners from both small and large organizations. Respondents' programming experience is controlled for in the questionnaire form.

Author actions: We have updated the data collection method on page 10.

Concern 5: What is the distribution of software models that the respondents have selected (i.e., how many selected waterfalls, AZ, agile, etc)?

- Plots are still missing y-axis labels.

- Error bars appear to be identical in length for same-colored bars in each plot. Why is that?

Author response: We have added the distribution of software models selected by the respondents as a table in the supporting file, and we have also labeled the y-axes of the plots.

Author actions: We have updated the data collection method section on page 10 and also updated the results figure on page 22.

Concern 6: To summarize their results, I suggest using a star diagram similar to Figure 2, to show where each model's strengths lie.

Author response: We use a star diagram to summarize the results adequately.

Author actions: We have updated the resulting figure on page 22.

Concern 7: The authors should re-examine their manuscript's language. There are clear typos, grammatical errors, and weird phraseology everywhere. A few examples:

- "After completing the paling phase ..." (page 3) -> After completing the planning phase ...

- " the iterative model adopts which is mainly focus on the coding and testing phase" (page 5) -> the iterative model focuses mainly on coding and testing phases

- "Now in the modern year," (page 8)

- "Questioner related to schedule" (Table 4) -> "Questions related to schedule"

Author response: The manuscript has been revised to improve the writing and correct typos.

Author actions: The manuscript has been revised to improve the writing and correct typos.

Concern 8: Software development models are still unnecessarily listed in a bulleted list in page 6.

Author response: Bullets are removed as advised.

Author actions: Bullets are removed as advised.

Concern 9: Data availability: the authors include summary data (Table 6) in the supplementary file but it is unclear what the numbers in the table mean. Proper data availability for this manuscript means that the authors must include all survey results (taking care to anonymize the authors and organizations of course).

Author response: We have updated the required table descriptions.

Author actions: We have updated the descriptions of Table 6 as well as Table 7 on page 12.

Once again, authors are grateful to all reviewers for their valuable time and suggestions to improve the manuscript.

Attachment

Submitted filename: Responses to reviewers v 1.2.docx

Decision Letter 2

M Usman Ashraf

20 Oct 2021

PONE-D-21-05435R2

Statistical Analysis of Software Development Models and Six-Pointed Star Methodology

PLOS ONE

Dear Dr. SARWAR,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Dec 04 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

M. Usman Ashraf, Ph.D

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Dear Editor,

While I appreciate the authors’ efforts in addressing my earlier concerns, I still find that the manuscript suffers from fundamental issues that have not been addressed by the authors and thus, cannot recommend it for acceptance.

First, the authors make an artificial distinction between a “methodology” and a “model”. The way they define it, one can have a “heavyweight agile” combination or a “lightweight waterfall” combination. None of these combinations make any sense; an agile methodology is lightweight by definition. This problematic distinction exists in the survey used to gather the programmers’ feedback. Specifically, the survey asks respondents to indicate the software methodology followed in their company (agile, waterfall, etc), then proceeds to ask them a series of questions, each requiring them to differentiate between a “lightweight” and “heavyweight” methodology. As stated previously, this survey structure means that one can have heavy agile or light waterfall methods, which is nonsensical. The authors’ abbreviated response to this concern was not satisfactory. It would have been better to group the software development models into the two categories instead. For example, the agile model belongs to the “lightweight” category, while the waterfall model belongs to the “heavyweight” category.

Second, the survey assumes that all programmers have experience with both “heavyweight” and “lightweight” methodologies. Again, the authors response to this concern was not satisfactory. They stated that all respondents had 3-5 years’ experience. But that doesn’t mean that all of them have experience with small- and large-scale projects or lightweight and heavyweight methodologies. For example, programmers might be highly experienced but work within small development shops which might favor lighter methods, and vice versa. The survey only asks how many years’ experience the respondent has, not the size of projects they have been involved in the past. This casts serious doubt on the results of the survey as it is highly likely that respondents are only responding with what they perceive to be the advantages and disadvantages of lightweight and heavyweight methodologies, rather than what the respondents experienced when following those methodologies.

Third, it seems to me that some aspects of the 6-star evaluation methodology can be better served by using objective measures rather than surveys. Scheduling, budget, and resources can be measured objectively in terms of time, money, and capital spent. Quality can be measured in terms of unit-test coverage, number of bugs, post-release issues, cyclomatic complexity, etc.

Fourth, the questions relating to scope and risk in the survey are very vague. For example, in the scope section there is this question: “Does the project have a decisive scope by adopting a lightweight methodology”. What does decisive scope mean? How can the respondent know? Similarly, what is meant by meeting business aspirations under the risk section?

While I would have liked to see the revised Figures in revision 2, I couldn’t download them and they weren’t included with the PDF. But those revisions are unlikely to change the fundamental un-addressed problems above.

In short, the methodology and survey adopted by the paper suffers from fundamental problems that make it impossible for me to recommend it for acceptance. To be clear, I don't think my decision will change without properly re-collecting the data using better instruments.

A minor note: the authors made available all the original survey responses on Google drive which is great. But they are not anonymized! the full names of participants and companies are visible.

Reviewer #2: Even though this manuscript has a specific subject of research I think that the presented results can be used as an applied approach to analysis. There are no other special comments to authors.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No



PLoS One. 2022 Apr 1;17(4):e0264420. doi: 10.1371/journal.pone.0264420.r006

Author response to Decision Letter 2


13 Dec 2021

Dear reviewers,

The authors owe gratitude to the referees for their valuable comments and suggestions, which have helped us to improve the presentation of this paper. All necessary changes have been incorporated in the revised version. Please find the rebuttal below.

Concern 1:

First, the authors make an artificial distinction between a “methodology” and a “model”. The way they define it, one can have a “heavyweight agile” combination or a “lightweight waterfall” combination. None of these combinations make any sense; an agile methodology is lightweight. This problematic distinction exists in the survey used to gather the programmers’ feedback. Specifically, the survey asks respondents to indicate the software methodology followed in their company (agile, waterfall, etc), then proceeds to ask them a series of questions, each requiring them to differentiate between a “lightweight” and “heavyweight” methodology. As stated previously, this survey structure means that one can have heavy agile or light waterfall methods, which is nonsensical. The authors’ abbreviated response to this concern was not satisfactory. It would have been better to group the software development models into the two categories instead. For example, the agile model belongs to the “lightweight” category, while the waterfall model belongs to the “heavyweight” category.

Author response: A model is a template, while a methodology is the study of the methods used in a field. In the updated manuscript, the lightweight and heavyweight methodologies are discussed separately, and the comparative analyses (lightweight vs. lightweight, heavyweight vs. heavyweight) are likewise presented separately. The waterfall model belongs to the heavyweight methodology, so it is compared only with models of that category; the lightweight models (agile, AZ) are compared within lightweight boundaries. For the categorization of small-, medium-, and large-scale projects, Table 7 has been added to the manuscript.

Author actions: We have updated the manuscript.

Concern 2:

Second, the survey assumes that all programmers have experience with both “heavyweight” and “lightweight” methodologies. Again, the authors response to this concern was not satisfactory. They stated that all respondents had 3-5 years’ experience. But that doesn’t mean that all of them have experience with small- and large-scale projects or lightweight and heavyweight methodologies. For example, programmers might be highly experienced but work within small development shops which might favour lighter methods, and vice versa. The survey only asks how many years’ experience the respondent has, not the size of projects they have been involved in the past. This casts serious doubt on the results of the survey as it is highly likely that respondents are only responding with what they perceive to be the advantages and disadvantages of lightweight and heavyweight methodologies, rather than what the respondents experienced when following those methodologies.

Author response: For the categorization of small-, medium-, and large-scale projects, Table 7 has been added to the manuscript. In the updated survey questionnaire, the experience section is divided into subsections (lightweight, heavyweight, or both), and respondents give their opinion about their experience with each methodology. The questions relating to scope and risk in the survey have been revised, and the survey questionnaire has been updated according to the reviewer's recommendations.

Author actions: We have updated the manuscript.

Concern 3:

Third, it seems to me that some aspects of the 6-star evaluation methodology can be better served by using objective measures rather than surveys. Scheduling, budget, and resources can be measured objectively in terms of time, money, and capital spent. Quality can be measured in terms of unit-test coverage, number of bugs, post-release issues, cyclomatic complexity, etc.

Author response: In the six-pointed star evaluation framework, scheduling, budget, and resources are measured in terms of time, money, and capital spent, while quality is measured in terms of customer satisfaction and project success.

Author actions: We have updated the manuscript.

Concern 4:

Fourth, the questions relating to scope and risk in the survey are very vague. For example, in the scope section there is this question: “Does the project have a decisive scope by adopting a lightweight methodology”. What does decisive scope mean? How can the respondent know? Similarly, what is meant by meeting business aspirations under the risk section?

Author response: The questions relating to scope and risk in the survey have been revised, and the survey questionnaire has been updated according to the reviewer's recommendations.

Author actions: We have updated the manuscript.

Once again, authors are grateful to all reviewers for their valuable time and suggestions to improve the manuscript.

Attachment

Submitted filename: Responses to reviewers v 1.3.docx

Decision Letter 3

M Usman Ashraf

11 Feb 2022

Statistical Analysis of Software Development Models by Six-Pointed Star Framework

PONE-D-21-05435R3

Dear Dr. SARWAR,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

M. Usman Ashraf, Ph.D

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Authors have addressed all the comments as per the directions. Thank you for considering the constructive comments given by the reviewers. 

Acceptance letter

M Usman Ashraf

11 Mar 2022

PONE-D-21-05435R3

Statistical Analysis of Software Development Models by Six-Pointed Star Framework

Dear Dr. Sarwar:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. M. Usman Ashraf

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Dataset

    (XLSX)

    Attachment

    Submitted filename: Responses to reviewers.docx

    Attachment

    Submitted filename: Responses to reviewers v 1.2.docx

    Attachment

    Submitted filename: Responses to reviewers v 1.3.docx

    Data Availability Statement

    All relevant data are within the paper and its Supporting Information files.
