Table 2. Ranking approaches for source code.
| Tool/Approach | Strength | Weakness |
|---|---|---|
| Google Code Search and Ohloh [41,42] | Results are ranked based on textual similarity. | Relies on a single feature, textual similarity. |
| Sourcerer [29] | Builds on the basic notion of CodeRank to extract structural information. | Focuses only on the structural information of source code. |
| PARSEWeb [4] | Uses the frequency and length of method-invocation sequences (MIS) to rank the final results. | Relies solely on the MIS feature during the ranking phase. |
| Exemplar [43] | Uses three ranking schemes, WOS (word occurrences schema), DCS (dataflow connection schema), and RAS (relevant API calls schema), to rank applications. | Ranks whole applications rather than source code snippets. |
| Semantic Code Search [44] | Ranks candidate code snippets by how closely they follow the call sequences extrapolated from the query snippet. | Call sequences are the only feature used to rank code snippets. |
| Pattern-based Approach [45] | Considers the popularity of working code examples to rank them. | Popularity is the only feature contributing to the final ranking. |
| QualBoa [37] | Incorporates both functional and quality attributes. | Ranks components based on the functional score alone. |
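
To make the contrast between single-feature and multi-feature ranking concrete, the sketch below combines textual, structural, and API-usage scores in a weighted sum, loosely in the spirit of Exemplar's WOS/DCS/RAS combination [43]. The class, function, field names, and weights are hypothetical illustrations, not the implementation of any tool in the table.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Snippet:
    name: str
    text_score: float       # textual similarity to the query (cf. WOS [43])
    structure_score: float  # structural/dataflow relevance (cf. DCS [43])
    api_score: float        # overlap with relevant API calls (cf. RAS [43])

def combined_rank(snippets: List[Snippet],
                  weights: Tuple[float, float, float] = (0.5, 0.3, 0.2)
                  ) -> List[Snippet]:
    # Weighted sum of the three feature scores; the weights here are
    # hypothetical and would need tuning in a real retrieval system.
    w_text, w_struct, w_api = weights
    return sorted(
        snippets,
        key=lambda s: (w_text * s.text_score
                       + w_struct * s.structure_score
                       + w_api * s.api_score),
        reverse=True,
    )

# A multi-feature ranker can prefer a structurally relevant snippet
# over one that merely matches the query text.
results = combined_rank([
    Snippet("text_only_match", 0.9, 0.1, 0.1),   # score 0.50
    Snippet("balanced_match", 0.6, 0.8, 0.7),    # score 0.68
])
print([s.name for s in results])  # ['balanced_match', 'text_only_match']
```

A single-feature approach, such as ranking by textual similarity alone, corresponds to fixing the weights at (1.0, 0.0, 0.0), which is the limitation shared by several of the tools above.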