
Choosing the right scientific software

Is the scientific method being undermined? Image courtesy Colin Kinner, Flickr, CC BY 2.0.

Last week, the journal Science published the results of a survey showing that non-scientific factors often come into play when researchers select software for modeling and other purposes. Could researchers' inability to weigh the relative merits of available software on scientific grounds be undermining the scientific method?

The survey was carried out by a UK-based team with members from Microsoft Research, Cambridge; the University of Oxford; and the Centre for Ecology and Hydrology, Penicuik. The team asked 400 members of the species distribution modeling community how they select the software they use in their research. Among the findings: 7% of respondents chose a particular piece of software on the grounds that "the developer is well-respected", while 9% and 18% cited "personal recommendation" and "recommendation from a close colleague", respectively, as the reasons behind their choices. By contrast, just 8% of respondents said that validating the software against other methods was a primary reason for their choice.

On the basis of these findings, the team recommends that universities endeavour to "produce scientists capable of instantiating science in code such that other scientists are able to peer-review code as they would other aspects of science" (79% of respondents expressed a desire to learn additional software and programming skills). The authors also argue that their findings have important implications for scientific publishing, writing: "Scientific software code needs to be not only published and made available but also peer-reviewed. That this is not part of the current peer-review model means that papers of which science is primarily software-based (i.e., most modeling papers) are not currently fully or properly peer-reviewed. It also means peer-reviewers need to be able to peer-review the code (i.e., be highly computationally literate)."

Overall, the authors conclude that scientific considerations are often given only minimal weight when researchers select the software they use to carry out their research. Rather than software being adopted purely because it enables users to ask and answer new scientific questions, or because it allows others to reproduce the science, communication channels, time, and social systems also play a major role. As a result, subjective perceptions, opinion leaders, and early adopters can make a big difference to which software is and isn't used in research. As the authors write: "Scientific considerations of the consequences of adoption generally occur late in the process, if at all."

The research team's article can be read in full on the Science website.



