
Last weekend the European Association of Science Editors held its conference in Split, Croatia. For me it was an opportunity to look behind the scenes of many journals at once and see what was coming up in other areas of science editing.
Social Media
AoB Blog sent me there for the social media session. I spoke with Sarah Linklater of The Lancet about social media for journals. It was a good example of a presentation leading on to discussion. For example, I’m keen on journals making things as shareable as possible. Possible is a difficult word, because Open Access journals, like AoB PLANTS, can share more than Annals of Botany, which is subscription-based. Even so, could we agree that images should be shareable? It turns out we couldn’t.
There is a good reason for this. Hannah Cagney, also at The Lancet, pointed out that patients might give consent for images to be used for medical research. However, sharing is complicated because consent for images in a research paper isn’t the same as allowing the gruesome results of your infection to go viral on Facebook.
Despite that, it seemed there was an appetite to share what people could. Buzzfeed, once a few people got their heads round it, seemed popular. I’m hoping this will lead to a few more journals expanding what they do on Social Media, including other plant science publishers.
Sexism and Gender
I went to the Sexism / Gender session. The plan is that EASE can produce some standards on dealing with sex and gender that journals can sign up to. There’s the obvious issue of whether women are under-represented in editing, peer review and so on. The other issue the session tackled was whether sex/gender issues are under-reported. Because of how the room divided, this was where I spent most of my time.
So how much sex/gender reporting do you need in a scientific article? The headline story from earlier in the year is that you’ll get a different result from your experiments with lab mice depending on whether men or women are handling them. How many people would think to include the sex of the experimenters in their write-up? Without this data, how much information are we missing that could have major consequences?
Another factor is that males and females can react in different ways to interventions and experiments anyway. A drug that works on male rats might not work so well on female rats. If the only results published are the male rat versions, this will skew results. The proposal put forward is that if samples are single-sex, the article title should say so. This needs some tweaking. AoB’s Rod Hunt pointed out that specifying “male” in the title of a testicular cancer study would be redundant, but once they’re polished, the guidelines might help highlight areas where we are simply missing data and didn’t realise it.
I found this session depressing. Partly because it’s the 2010s, not the 1960s, yet we’re still dealing with issues that have needed attention for half a century. For a profession that is supposed to challenge received wisdom on a daily basis, it’s painfully conservative. The other thing was a concern that standards would be seen as political correctness. It’s basic information, and recording it doesn’t sign you up to a women’s rights movement. If there’s a perception it does, then that points to deeper problems at a journal.
Metrics
Saturday night I avoided alcohol and went to bed early so I could be up first thing for the Sunday sessions. Thanks to heat and tonsillitis, I needn’t have bothered. On the last day I just attended the one session, because the room was spinning and my head was throbbing. As a result I didn’t get everything out of the Metrics session that I could have.
One of the first targets was the Journal Impact Factor: a ratio of the number of times a journal’s articles are cited to the number of articles it publishes. It measures impact, not quality, yet it is regularly misused to assess individual scientists. It can also be gamed, sometimes unintentionally, sometimes not. An author’s perfectly legal suggestion that his paper should be cited gave an unexpected boost to the impact factor of Acta Crystallographica.
In the search for better metrics, speakers talked about various altmetrics. There was also a discussion of citing bioresources (PDF). How do you cite and credit data? It’s a problem the people at FigShare would be interested in too. Sadly by this stage my headache in Split was a splitting headache, so I wasn’t able to give this the attention it deserved.
Elsewhere
If the conference were purely about the sessions, I’m not sure it would be necessary. Everyone could have just swapped papers. However, outside the sessions I met some interesting people.
The biggest issue I heard about was one we don’t have in the UK. Here we have politicians who might have a poor grasp of science. On both sides there are cases where politicians simply dismiss anything that doesn’t suit them. In other parts of the world, governments have turned actively hostile to science. We vote for different parties at AoB Blog, so there’s no interest in pushing a party line here; we can stick to science. I’m hoping what we see elsewhere isn’t a sign that, sooner or later, reporting reality will be seen as inherently left-wing or right-wing.
It may mean there’s plenty to discuss for the theme of the 2016 conference in Strasbourg: Scientific integrity: editors on the front line.
