What is: Open Access publishing?

Peer review is generally accepted to be the least-worst way to generate trust in the scientific publishing process. By allowing experts in the field to read, critique, confirm, challenge and improve your work before it enters the mainstream body of science, poor-quality or erroneous work should be filtered out before it gets the chance to distort public perception and policy. However, the system is not without its critics. Anonymity allows reviewers to indulge in spiteful or personal attacks that do nothing to improve the science behind the work, and can delay or even prevent publication of perfectly acceptable research. The system also rests on a financial model that seems to benefit only the publishing companies. In traditional journals, scientists relinquish their copyright to a company that then charges them and their colleagues to read the work, restricting access to those in universities or with big budgets (individual papers can cost $30 or more, and subscriptions run to the thousands). Journal reviewers and editors work for free, treating the process as part of their obligation to the community, while the for-profit publisher reaps the real benefits.

Recently, open access publishing has started to change the way that ordinary people can read the research that they, through their taxes and charity donations, have paid for, although the publishers' business model remains much the same. In a typical open access workflow, the researchers submitting a paper pay an "article processing charge" (APC) once the work has been accepted. Paying this charge, which is often £1,000 or more, allows them to retain the copyright and lets anyone, anywhere in the world, read the paper for free. It shifts the cost of access from the distributed consumers, who often lack the resources to pay for the research, to the universities producing the work in the first place. Most research funding bodies now request open access publishing, and have provided some funds to cover the APCs, for now at least. Reviewers and editors are still unpaid, and publishers still make a profit; in fact, since many articles are still not open access, universities (i.e. taxpayers) are on the hook for both journal subscriptions and APCs.

So why are researchers still paying these companies such large amounts of money (their profit margins are amongst the best of any industry)? Well, academic promotion is largely decided by publication history, and the easiest way to judge a publication history is to look at the journals a researcher publishes in, rather than reading the papers themselves. The pressure is therefore on, especially for young researchers, to publish in the most prestigious journals, and these tend to be the most expensive ones, where articles are either restricted access or carry high APCs.

Recently, there has been a shift towards more open and accountable publishing systems, with journals allowing researchers to post either draft versions or even the finished paper on their own websites without violating copyright. Many universities have created online repositories to let researchers store and share their work (mine is available through my Manchester profile page). These repositories are imperfect, since the papers can be hard to find, but they are better than nothing.

Even within my short career, the way that people publish and access science has changed; open access publishing is still developing, and innovation over the next 5-10 years is likely to change the landscape even further.

Canadian permafrost as a source of easily-degraded organic carbon

The February issue of “Organic Geochemistry” will include a paper by David Grewer and colleagues from the University of Toronto and Queen’s University, Canada, which investigates what happens to organic carbon in the Canadian High Arctic when the surface permafrost layer slips and erodes. This is a paper that I was involved in, not as a researcher but as a reviewer, helping to make sure that published scientific research is novel, clear and correct.

Map of Cape Bounty in the Canadian High Arctic

The researchers visited a study site at Cape Bounty, Nunavut, to study features known as permafrost active layer detachments (ALDs). The permafrost active layer is the top part of the soil, the metre or so that thaws and re-freezes each year. ALDs are erosion events in which this thawed top layer slides down the hillslope towards the river. Rivers can then erode and transport the mobilised material downstream towards the sea.

The team used organic geochemistry and nuclear magnetic resonance spectroscopy to find out which chemicals were present in the river above and below the ALDs. They found that the sediment eroded from the ALDs contains carbon that is easily degraded and can break down in the river, releasing CO2 to the atmosphere and providing food for bacteria and other micro-organisms in the water.

Automated Analysis of Carbon in Powdered Geological and Environmental Samples by Raman Spectroscopy

The first paper produced directly from my PhD research was published last month in the journal Applied Spectroscopy. Automated Analysis of Carbon in Powdered Geological and Environmental Samples by Raman Spectroscopy, written with Niels Hovius, Albert Galy, Vasant Kumar and James Liu, describes a method I developed for collecting and analysing Raman spectroscopy data.

I will discuss Raman spectroscopy in depth in a future post on this site, but the short version is that Raman allows me to determine the crystal structure of pieces of carbon within my samples. A river or marine sediment sample can be sourced from multiple areas that are mixed together during transport, so working out where a sample came from can prove very difficult. However, these source areas often contain carbon in different crystalline states; if I can identify the carbon particles within a sample, then its sources can be worked out even after they have been mixed together, as the sketch below illustrates. The challenge is that there can be many carbon particles within a sample, and each one might be subtly different. To properly identify each mixed sample, a lot of data is required, which can be laborious to process.
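
As a rough illustration of that unmixing idea, a two end-member mixing calculation shows how the proportion of, say, graphitic particles in a mixed sample constrains how much each source contributed. All of the numbers below are invented placeholders, not values from the paper:

```python
# Toy two end-member mixing sketch (assumed numbers): one source supplies mostly
# graphitic carbon, the other mostly disordered carbon, so the class proportions
# counted in a mixed sample constrain the contribution of each source.
SOURCE_A_GRAPHITIC = 0.8   # assumed graphitic fraction in source A (e.g. bedrock)
SOURCE_B_GRAPHITIC = 0.1   # assumed graphitic fraction in source B (e.g. recent biomass)

def fraction_from_source_a(mixed_graphitic_fraction):
    """Solve f*A + (1-f)*B = mixed for the source-A fraction f."""
    return ((mixed_graphitic_fraction - SOURCE_B_GRAPHITIC)
            / (SOURCE_A_GRAPHITIC - SOURCE_B_GRAPHITIC))

# A mixed sample in which 45% of the counted particles were classified graphitic:
print(f"~{fraction_from_source_a(0.45):.0%} of the carbon from source A")
```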

Determining the types of carbon in a sample. Each spectrum is classified according to its peak shapes. Image (C) Applied Spectroscopy

My paper describes how large numbers of spectra can be collected efficiently from a powdered sediment sample. By flattening the powder between glass slides and scanning the sample methodically under the microscope, I can collect around ten high-quality spectra in an hour, meaning five to ten samples can be analysed in a day. Powdered samples are much easier to study than raw, unground sediment, and I have shown that the grinding process does not interfere with the structure of the carbon particles, so grinding is a valid processing step.

Once the data have been collected, a method I devised processes each spectrum automatically on a computer, removing the time-consuming task of identifying and measuring each peak by hand. The peaks that carbon particles produce under Raman spectroscopy have been calibrated by other workers against the maximum temperature the rock experienced, and this allows me to classify each carbon particle into one of several groups. These groups can then be used to compare samples, characterise the source material and identify it within the mixed samples.
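
To give a flavour of what such automated processing can look like, here is a minimal sketch. It is not the published script: the Lorentzian peak shape, the D and G band positions and the classification thresholds are illustrative assumptions only.

```python
# Illustrative sketch: fit the D and G bands of a carbon Raman spectrum and
# sort the particle into a coarse crystallinity class from the fitted peaks.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, centre, width):
    """Single Lorentzian peak."""
    return amp * width**2 / ((x - centre)**2 + width**2)

def two_band_model(x, a_d, c_d, w_d, a_g, c_g, w_g):
    """Sum of a D band (~1350 cm^-1) and a G band (~1590 cm^-1)."""
    return lorentzian(x, a_d, c_d, w_d) + lorentzian(x, a_g, c_g, w_g)

def classify_spectrum(shift, intensity):
    """Fit both bands and return a coarse class.

    The band positions and the I(D)/I(G) thresholds below are placeholders
    for illustration, not the calibration used in the paper.
    """
    p0 = [intensity.max(), 1350, 50, intensity.max(), 1590, 40]
    popt, _ = curve_fit(two_band_model, shift, intensity, p0=p0, maxfev=10000)
    a_d, _, _, a_g, _, _ = popt
    r1 = a_d / a_g                      # D/G intensity ratio
    if r1 < 0.5:
        return "well-ordered (graphitic)"
    elif r1 < 1.0:
        return "intermediate"
    return "disordered"

# Example with a synthetic, noisy spectrum:
shift = np.linspace(1000, 1800, 800)
intensity = two_band_model(shift, 80, 1350, 45, 100, 1590, 35)
intensity += np.random.default_rng(0).normal(0, 2, shift.size)
print(classify_spectrum(shift, intensity))
```

The real method classifies particles using peak parameters that other workers have calibrated against peak temperature; the simple ratio thresholds above are just a stand-in for that step.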

Delegating as much analysis as possible to a computer ensures that each sample is treated the same, with no bias on the part of the operator, and also cuts down the time required to process each sample, which means that more material can be studied. The computer script used to analyse the samples is freely available, so other researchers can apply it to their own data, enabling a direct comparison with any samples that I have worked on. This technique will hopefully prove useful beyond my own work, and anyone interested in using it is welcome to contact me. While the paper discusses my application of the technique to Taiwanese sediments, I have already been using it to study Arctic Ocean material as well.

The paper itself is available from the journal via a subscription, and is also deposited along with the computer script in the University of Manchester’s open access library.

Subducted seafloor relief stops rupture in South American great earthquakes

This was my first published paper, based on research undertaken during my Masters. It is published in Earth and Planetary Science Letters, and available for download via ResearchGate and the MMU e-space repository.

The initial observation behind this work was that ‘great’ earthquakes, those with magnitudes above 8.0, tend to have end points in the same places. An earthquake’s end point marks the limit of movement on the fault plane; shaking occurs outside this zone, but it is most violent in the region above the ruptured fault. The figure below shows the rupture zones and end points of great South American earthquakes.

Rupture zones and subducting topography in South America

An initial inspection of the plate margin (where the incoming Nazca Plate is subducted beneath the South American Plate) suggested that where there are underwater mountains on the incoming plate entering the subduction zone (black blobs in the figure above), these tend to coincide with the end points of earthquakes. The figure below shows how topographic features (underwater mountains and ridges) match up with earthquake locations.

Subducting topography and earthquake locations

To test whether this was a real relationship or just coincidence, I designed a model that produced earthquakes along the subduction zone. The model had two versions. In version one, earthquakes were placed end to end along the subduction zone (as tends to happen in reality, with one earthquake starting at the end point of a previous one), but their end points were not constrained by anything. In the second version, if an earthquake tried to rupture past an incoming topographic feature, the rupture was stopped there.
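
The sketch below is a toy version of these two model variants. Every number in it (margin length, feature positions, rupture lengths) is invented for illustration and is not taken from the paper:

```python
# Toy sketch of the two model variants: earthquakes laid end to end along a
# one-dimensional margin; in version 2 a rupture stops at the first subducting
# feature it tries to pass. All numbers are assumed, not the published values.
import random

MARGIN_LENGTH_KM = 6000                      # assumed margin length
FEATURES_KM = [500, 1400, 2300, 3600, 4800]  # assumed feature positions

def simulate(stop_at_features, n_quakes=50, seed=1):
    """Return the list of earthquake end points for one model run."""
    rng = random.Random(seed)
    endpoints, position = [], 0.0
    for _ in range(n_quakes):
        rupture = rng.uniform(100, 1000)     # assumed rupture length range, km
        end = position + rupture
        if stop_at_features:
            # Truncate the rupture at the first feature it tries to pass.
            for f in FEATURES_KM:
                if position < f < end:
                    end = f
                    break
        end = min(end, MARGIN_LENGTH_KM)
        endpoints.append(end)
        # The next earthquake starts where this one stopped (wrap at the end).
        position = 0.0 if end >= MARGIN_LENGTH_KM else end
    return endpoints

unconstrained = simulate(stop_at_features=False)
constrained = simulate(stop_at_features=True)
print(sum(e in FEATURES_KM for e in constrained), "endpoints on features")
```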

By applying statistical tests to the model, I showed that earthquake end points were far more likely than random chance to be located where topography higher than 1000 m was being subducted. The model also showed that subducting topography led to a reduction in background earthquakes: at end points not associated with topography, there was an increased number of smaller earthquakes releasing the built-up stress, but this was not seen at end points near subducting topography. Subduction of a high seamount or ridge therefore makes that part of the subduction zone less seismically active (it becomes ‘aseismic’), which prevents both great earthquakes from rupturing past it and smaller earthquakes from occurring.
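
For a flavour of the kind of statistical test involved, the sketch below runs a simple Monte Carlo comparison: how often would randomly placed end points coincide with subducting features as closely as the observed ones do? Every number in it (feature positions, end points, the 50 km tolerance) is an invented placeholder, not data from the paper:

```python
# Hedged Monte Carlo sketch: compare the number of observed end points that lie
# near subducting features with the number expected from random placement.
import random

MARGIN_LENGTH_KM = 6000
FEATURES_KM = [500, 1400, 2300, 3600, 4800]             # assumed feature positions
OBSERVED_ENDPOINTS_KM = [510, 1390, 2310, 3000, 4790]   # assumed observations
TOLERANCE_KM = 50                                        # "coincident" if within 50 km

def n_coincident(endpoints):
    """Count end points within TOLERANCE_KM of any feature."""
    return sum(any(abs(e - f) <= TOLERANCE_KM for f in FEATURES_KM)
               for e in endpoints)

observed = n_coincident(OBSERVED_ENDPOINTS_KM)

rng = random.Random(42)
trials = 10000
at_least_as_many = 0
for _ in range(trials):
    random_endpoints = [rng.uniform(0, MARGIN_LENGTH_KM)
                        for _ in OBSERVED_ENDPOINTS_KM]
    if n_coincident(random_endpoints) >= observed:
        at_least_as_many += 1

# A small p-value suggests the observed clustering is unlikely to be chance.
print(f"observed coincidences: {observed}, p = {at_least_as_many / trials:.4f}")
```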