
A wonderful example of Open Science


In our new lab here in Regensburg, we are currently re-establishing the method of confocal microscopy. To start with, we used a fly line which shows a defect in the temporal structure of its spontaneous behavior (a project of one of our graduate students). This fly line carries two transgenic constructs (c105 and c232) that drive expression in specified sets of neurons in the fly brain, so the imaged pattern should be the merged expression patterns of c105 and c232. In this case, the transgenes drive expression of green fluorescent protein (GFP). Around 7pm on the evening right after the dissections and the confocal imaging, I reconstructed the image stacks as a 3D video of the expression pattern in the brain and uploaded it to my YouTube channel.
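For anyone wanting to reproduce this step, a common way to turn a confocal stack into such a video is a rotating maximum-intensity projection. Here is a minimal sketch in Python, assuming a single-channel TIFF stack (the file names are hypothetical, not our actual data):

    # Minimal sketch: rotating maximum-intensity projection of a confocal stack.
    # Assumes a single-channel TIFF stack; file names are hypothetical.
    import numpy as np
    import tifffile
    import imageio
    from scipy import ndimage

    stack = tifffile.imread("brain_stack.tif")  # shape: (z, y, x)

    frames = []
    for angle in range(0, 360, 4):
        # Rotate the volume around the y-axis (i.e., in the z-x plane) ...
        rotated = ndimage.rotate(stack, angle, axes=(0, 2), reshape=False, order=1)
        # ... then project along z by taking the maximum intensity per pixel
        mip = rotated.max(axis=0)
        # Normalize to 8 bit for video output
        frames.append((255 * mip.astype(float) / stack.max()).astype(np.uint8))

    # Writing mp4 requires the imageio-ffmpeg plugin
    imageio.mimsave("expression_pattern.mp4", frames, fps=15)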

Only an hour or so after I posted the video, Douglas Armstrong sent me an email:

Are you sure c232 is in there? most of its neurons are missing, looks like c105 though

Cheers,

D

http://flytrap.inf.ed.ac.uk/html/enhancer/c105/
http://www.fly-trap.org/html/enhancer/c232/

And sure enough, upon closer inspection and comparison with the data displayed in the two links Douglas sent along, it seemed as if only the c105 expression pattern was visible. So, as we are establishing the technique here, we need to pay extra attention to threshold effects, preparation and the imaging settings at the two different confocal microscopes we have here. In addition, we need to image flies with the individual drivers, to make sure nothing is wrong with the fly strain that should contain both drivers.
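One quick way to screen for such threshold effects is to compare the intensity histograms and the fraction of saturated voxels between stacks from the two microscopes. A minimal sketch (file names again hypothetical):

    # Minimal sketch: check stacks for clipping/saturation, which can make
    # a weak expression pattern disappear. File names are hypothetical.
    import numpy as np
    import tifffile
    import matplotlib.pyplot as plt

    for name in ("microscope_A_stack.tif", "microscope_B_stack.tif"):
        stack = tifffile.imread(name)
        plt.hist(stack.ravel(), bins=256, histtype="step", log=True, label=name)
        # A large spike at the maximum value indicates clipped (saturated) voxels
        print(name, "fraction at max intensity:", np.mean(stack == stack.max()))

    plt.xlabel("voxel intensity")
    plt.ylabel("voxel count (log)")
    plt.legend()
    plt.show()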

All of these things are important, and we probably would not have paid special attention to them, as we thought the double driver line was simply a good line to start optimizing the method with.

In other words, had I not published the data online right after we got them, I might have spent quite some time unaware of the potential problems with this fly strain.

Thanks so much, Douglas!


SHARE: Library-based publishing becoming a reality?


The recently released development draft for the SHared Access Research Ecosystem (SHARE), authored by the Association of American Universities (AAU), the Association of Public and Land-grant Universities (APLU) and the Association of Research Libraries (ARL) in response to the OSTP memo on public access to federally funded research in the US, sounds a lot like the library-based publishing system I’ve been perpetually arguing for. It’s even in our paper on the pernicious consequences of journal rank. Could this be the initial step to break the stranglehold publishers have on scholarly communication? Here are some key excerpts from the document:

universities have invested in the infrastructure, tools, and services necessary to provide effective and efficient access to their research and scholarship. The new White House directive provides a compelling reason to integrate higher education’s investments to date into a system of cross-institutional digital repositories that will be known as SHared Access Research Ecosystem (SHARE).

[...]

Universities already own and operate key pieces of the infrastructure, including digital institutional repositories, Internet2, Digital Preservation Network (DPN), and more. These current capacities and capabilities will naturally be extended over time. Universities have also invested in recent years in working with Principal Investigators and other campus partners on developing digital data management plans to comply with agency requirements.

[...]

University-based digital repositories will become a public access and long-term preservation system for the results of federally funded research. SHARE achieves the mission of higher education by providing access to and preserving the intellectual assets produced by the academy, in particular those that are made openly available.

[...]

Agencies that choose to develop their own digital repositories, or work with an existing repository such as PubMed Central, could simply adopt the same metadata fields and practices to become a linked node in this federated, consensus-based system. Discipline-based repositories, some of which are housed at universities, will be included.

[...]

The SHARE workflow is straightforward, and using existing protocols can be fully automated.

  1. PI or author submits manuscript to journal as currently occurs.
  2. Journal publisher coordinates peer review, accepts, and edits manuscripts as currently occurs.
  3. Journal submits XML version of the final peer reviewed manuscript (including the abstract) to the PI’s designated repository, or the author submits the final peer-reviewed and edited manuscript accepted for publication (including the abstract) to the PI’s designated digital repository.
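Step 3 in particular could run on protocols most institutional repositories already speak, such as OAI-PMH. Purely as an illustration (this is not from the SHARE draft, and the endpoint URL is hypothetical), here is a minimal sketch of how a federated node could harvest deposit metadata from such an interface:

    # Illustrative sketch: harvesting Dublin Core metadata from a repository's
    # OAI-PMH endpoint, the kind of existing protocol SHARE could build on.
    # The endpoint URL is hypothetical.
    import requests
    import xml.etree.ElementTree as ET

    OAI_ENDPOINT = "https://repository.example.edu/oai"
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    root = ET.fromstring(requests.get(OAI_ENDPOINT, params=params, timeout=30).content)

    ns = {
        "oai": "http://www.openarchives.org/OAI/2.0/",
        "dc": "http://purl.org/dc/elements/1.1/",
    }
    # Print the title of every harvested record
    for record in root.findall(".//oai:record", ns):
        title = record.find(".//dc:title", ns)
        if title is not None:
            print(title.text)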

In principle, this sounds almost verbatim like the system I advocate, with a few exceptions. Clearly, SHARE is still a ‘green’ OA route, meaning that regular journal publishing still occurs. I see no major issue with this, as some transition period will inevitably be required. The important part is that we wrest at least some control over our literature back from the publishers.

I also find it important to point out the combined effort of these organizations to integrate the data mandates with the literature. Now only the software requirements are missing – I wonder why?

Another task not mentioned will be to integrate the back-archives of all the published literature into SHARE. Once we get that incorporated, we could potentially offer a search, filter and discovery system superior to anything currently on the market – a system which I would guess is crucial for eventually weaning ourselves off publishers altogether.

In conclusion, this might be a very first baby step of our emancipation from corporate publishers. If we follow the example of SciELO and inspire concerted action by a critical mass of institutions of higher education and research, we might just be able to achieve a fully functional scholarly communication system, perhaps even within this generation. Now is the time to provide our feedback on this draft. I think open access activists should get together and tell them what needs to happen.

What is the difference between text, data and code?


tl;dr: So far, I can’t see any fundamental difference between our three kinds of intellectual output: software, data and texts.


I admit I’m somewhat surprised that there appears to be a need to write this post in 2014. After all, this is not really the dawn of the digital age any more. Be that as it may, it is now March 6, 2014, six days since PLoS’s ‘revolutionary’ data sharing policy was revealed, and only a few people seem to notice the irony of avid social media participants pretending it’s still 1982. For the uninitiated, just skim Twitter’s #PLoSfail, read Edmund Hart’s post or see Terry McGlynn’s post for some examples. I’ll try to refrain from reiterating any arguments made there already.

Colloquially speaking, one could describe the scientific method, somewhat shoddily, as making an observation, making sense of the observation and presenting the outcomes to interested audiences in some version of language. Since the development of the scientific method somewhere between the 16th century and now, this is roughly how science has progressed. Before the digital age, it was relatively difficult to let everybody who was interested participate in the observations. Today, this is much easier. It still varies tremendously between fields, obviously, but compared to before, it’s a world of difference. Today, you could say that scientists collect data, evaluate the data and then present the results in a scientific paper.

Data collection can either be done by hand or more or less automatically. It may take years to sample wildlife in the rainforest and minutes to evaluate the data on a spreadsheet. It may take decades to develop a method and then seconds to collect the data. It may take a few hours to generate the data, first by hand and then by automated processing, but decades before someone else comes up with the right way to analyze and evaluate the data. What all scientific processes today have in common is that at some point the data becomes digitized, either by commercial software or by code written precisely for that purpose. Perhaps not in all, but surely in the vast majority of quantitative sciences, the data is then evaluated using either commercial or hand-coded software, be it for statistical testing, visualization, modeling or parameter/feature extraction, etc. Only after all this is done and understood does someone sit down and attempt to put the outcome of this process into words that scientists not involved in the work can make sense of.

Until about a quarter of a century ago, essentially all that was left of the scientific process above were some instruments used to make the observations and the text accounts of them. Ok, maybe some paper records and later photographs. With a delay of about 25 years, the scientific community is now slowly awakening to the realization that the digitization of science actually allows us to preserve the scientific process much more comprehensively. Besides being a boon for historians, reviewers, committees investigating scientific misconduct or the critical public, preserving this process promises the added benefit of being able to reward not only those few whose marketing skills surpass the average enough to manage publishing their texts in glamorous magazines, but also those who excel at scientific coding or data collection. For the first time in human history, we may even have a shot at starting to develop software agents that can trawl data for testable hypotheses no human could ever come up with – proofs of concept already exist. There is even the potential to alert colleagues to problems with their data, to use the code for purposes the original author never dreamed of, or to extract parameters from the data which the original experimentalist lacked the skills to extract. In short, the advantages are too many to list and reach far beyond science itself.

Much like after the previous, once merely hypothetical, requirement of proofs for mathematical theorems, or the equally once-hypothetical requirement of statistics and sound methods, there is again resistance from the more conservative sections of the scientific community, for largely the same reasons, subsumed by: “it’s too much work and it’s against my own interests”.

I can sympathize with this general objection, as making code and data available is more work and does facilitate scooping. However, the same can be said of publishing traditional texts: it is a lot of work that takes time away from experiments and opens the floodgates to others making a career on the back of your work. Thus, any consistent proponent of “it’s too much work and it’s against my own interests” ought to resist text publications with just as much fervor as they resist publishing data and code. The same arguments apply.

Such principles aside, in practice our general infrastructure of course makes it much too difficult to publish either text, data or software, which is why so many of us now spend so much time and effort on publishing reform, and why our lab in particular is developing ways to improve this infrastructure. But this, too, does not differ between software, data and texts: our digital infrastructure is dysfunctional, period.

Neither does making your data and software available make you particularly more liable to scooping or exploitation than the publication of your texts does. The risks vary dramatically from field to field and from person to person and are impossible to quantify. Obviously, just as with text publications, data and code must be cited appropriately.

There may be instances where the person writing the code or collecting the data already knows what they want to do with the code/data next, but this will of course take time, and someone less gifted with ideas may be on the hunt for an easy text publication. In such (rare?) cases, I think a practical solution would be to implement a limited provision on the published data/code, stating the precise nature of the planned research and the time-frame within which it must be concluded. Because of its digital nature, any violation of such a provision would be easily detectable. The careers of our junior colleagues need to be protected, and any publishing policy on text, data or software ought to strive towards maximizing such protections without hazarding the entire scientific enterprise. Here, too, there is no difference between text, data and software.
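To make this concrete, here is a purely hypothetical sketch of what such a machine-readable provision might look like; every field name, the DOI and the analysis description are invented for illustration:

    # Hypothetical sketch of a "limited provision" attached to a published
    # dataset: a reserved analysis plus a deadline after which it lapses.
    from datetime import date

    provision = {
        "dataset_doi": "10.5555/example.dataset",  # hypothetical DOI
        "reserved_analysis": "spectral analysis of inter-bout intervals",
        "reserved_until": date(2015, 3, 1),
        "holder": "the graduate student who collected the data",
    }

    def analysis_is_reserved(planned_analysis, today):
        """Return True if the planned analysis is still covered by the provision."""
        return (planned_analysis == provision["reserved_analysis"]
                and today <= provision["reserved_until"])

    # Still reserved in mid-2014, free for everyone after March 2015:
    print(analysis_is_reserved("spectral analysis of inter-bout intervals",
                               date(2014, 6, 1)))  # True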

Finally, one might make a reasonable case that the rewards are stacked disproportionately in favor of text publications, in particular with regard to publications in certain journals. However, it almost goes without saying that it is also unrealistic to expect tenure committees and grant evaluators to assess software and data contributions before anybody is even contributing and sharing data or code. Obviously, in order to be able to reward coders and experimentalists just as we reward the Glam hunters, we first need something to reward them for. That being said, in today’s antiquated system it is certainly a wise strategy to prioritize Glam publications over code and data publications – but without preventing change for the better in the process. This is obviously a chicken-and-egg situation which will not be solved by the involved parties waiting for each other. Change needs to happen on both sides if any change is to happen.

To sum it up: our intellectual output today manifests itself in code, data and text. All three are complementary and contribute equally to science. All three expose our innermost thoughts and ideas to the public; all three make us vulnerable to exploitation. All three require diligence, time, work and frustration tolerance. All three constitute the fruits of our labor, often the most cherished outcome of our passion and dedication. It is almost an insult to the coders and experimentalists out there that these fruits should remain locked up and decay any longer. At the very least, any opponent of code and data sharing ought, to be consistent, to also oppose text publications for exactly the same reasons. We are already 25 years late in making our CVs contain code, data and text sections under the “original research” heading. I see no reason why we should be rewarding Glam-hunting marketers any longer.

UPDATE: I was just alerted to an important and relevant distinction between text, data and code: file extension. Herewith duly noted.

Evidence-resistant science leaders?


Last week, I spent two days at a symposium entitled “Governance, Performance & Leadership of Research and Public Organizations”. The meeting gathered professionals from all walks of science and research: economists, psychologists, biologists, epidemiologists, engineers and jurists, as well as politicians, university presidents and other leaders of the most respected research organizations in Germany. It was organized by Isabell Welpe, an economist specializing, broadly speaking, in incentive systems. She managed to bring some major figures to this meeting, not only from Germany, but notably also John Ioannidis from the USA and Margit Osterloh from Switzerland. The German participants included former DFG president and now Leibniz president Matthias Kleiner (the DFG being the largest funder in Germany and the Leibniz Association consisting of 89 non-university federal research institutes), the president of the German Council for Science and the Humanities, Manfred Prenzel, the Secretary General of the Max Planck Society, Ludwig Kronthaler, and the president of Munich’s Technical University, Wolfgang Herrmann, to mention only some of them. Essentially, all major research organizations in Germany were represented by at least one person in a leading position, supplemented with expertise from abroad.

All of these people shape the way science will be done in the future, either at their universities and institutions, in Germany, or around the world. They are decision-makers with the power to control the work and job situation of tens of thousands of current and future scientists. Hence, they ought to be the most solution-oriented, evidence-based individuals we can find. I was shocked to learn that this was an embarrassingly naive assumption.

In my defense, I was not alone in my incredulity, but maybe that only goes to show how insulated scientists are from political realities. As usual, there were of course gradations between individuals, but at the same time there seemed to be a discernible grouping into what could be termed the evidence-based camp (the scientists and other professionals) and the ideology-based camp (the institutional leaders). With one exception, I won’t attribute any of the instances I recount to any particular individual, as we had better focus on solutions to the more general obstructive attitude, rather than on a debate about the individuals’ qualifications.

On the scientific side, the meeting brought together a number of thought leaders detailing how different components of the scientific community perform. For instance, we learned that peer review is quite capable of weeding out obviously weak research proposals, but that in establishing a ranking order among the non-flawed proposals it is rarely better than chance. We learned that gender and institution biases are rampant among reviewers and that many rankings are devoid of any empirical basis. Essentially, neither peer review nor metrics perform at the level we expect of them. It became clear that we need to find solutions to the lock-in effect, the Matthew effect and the performance paradox, and to some extent what some potential solutions may be. Reassuringly, different people from different fields using data from different disciplines arrived at quite similar conclusions. The emerging picture was clear: we have quite a good empirical grasp of which approaches are working and, in particular, which are not. Importantly, as a community we have plenty of reasonable and realistic ideas of how to remedy the non-working components. However, whenever a particular piece of evidence was presented, one of the science leaders got up and proclaimed “In my experience, this does not happen”, or “I cannot see this bias”, or “I have overseen a good 600 grant reviews in my career and these reviews worked just fine”. Looking back, an all too common pattern at this meeting was one of scientists presenting data and evidence, only to be countered by a prominent ex-scientist with an evidence-free “I disagree”. It appeared quite obvious that we do not seem to suffer from a lack of insight, but rather from a lack of implementation.

Perhaps the most egregious and hence illustrative example was the behavior of the longest serving university president in Germany, Wolfgang Herrmann, during the final panel discussion (see #gplr on Twitter for pictures and live comments). This will be the one exception to the rule of not mentioning individuals. Herrmann was the first to talk and literally his first sentence was to emphasize that the most important objective for a university must be to get rid of the mediocre, incompetent and ignorant staff. He obviously did not include himself in that group, but made clear that he knew how to tell who should be classified as such. When asked which advice he would give university presidents, he replied by saying that they ought to rule autocratically, ideally by using ‘participation’ as a means of appeasing the underlings (he mentioned students and faculty), as most faculty were unfit for democracy anyway. Throughout the panel, Herrmann continually commended the German Excellence Initiative, in particular for a ‘raised international visibility’ (whatever that means), or ‘breaking up old structures’ (no idea). When I confronted him with the cold hard data that the only aspects of universities that showed any advantage from the initiative were their administrations and then asked why that didn’t show that the initiative had, in fact, failed spectacularly, his reply was: “I don’t think I need to answer that question”. In essence, this reply in particular and the repeated evidence-resistant attitude in general dismissed the entire symposium as a futile exercise of the ‘reality-based community‘, while the big leaders were out there creating the reality for the underlings to evaluate, study and measure.

Such behaviors are not surprising when we hear them from politicians, but from (ex-)scientists? At the first incident or two, I still thought I had misheard or misunderstood – after all, there was little discernible reaction from the audience. Later I found out that I was not the only one who was shocked. After the conference, some attendees discussed several questions: Can years of leading a scientific institution really make you so completely impervious to evidence? Do such positions of power necessarily wipe out all scientific thinking, or wasn’t all that much of it there to begin with? Do we select for evidence-resistant science leaders, or is being/becoming evidence-resistant in some way a prerequisite for striving for such a position? What if these ex-scientists have always had this nonchalant attitude towards data? Should we scrutinize their old work more closely for questionable research practices?

While for me personally such behavior would clearly and unambiguously disqualify an individual from any leading position, relieving these individuals of their responsibilities is probably not the best solution – judging from the meeting last week, there are simply too many of them. Instead, it emerged from an informal discussion after the end of the symposium that a more promising approach may be a different meeting format: one where the leaders aren’t propped up for target practice, but included in a cooperative format, where admitting that some things are in need of improvement does not lead to any loss of face. Clearly, the evidence and the data need to inform policy. If decision-makers keep ignoring the outcomes of empirical research on the way we do science, we might as well drop all efforts to collect the evidence.

Apparently, this was the first such conference at the national level in Germany. If we can’t find a way for the data presented there to have a tangible effect on science policy, it may well have been the last. Is this a phenomenon people observe in other countries as well, and if so, how are they trying to solve it?




