Some sporadic insights into academia.
Science is Fascinating.
Scientists are slightly peculiar.
Here are the views of one of them.

Sunday 13 March 2016

Shiny, shiny, shiny beads of latex

Part of the reason I got into science blogging was to drum up more readership for my scientific papers. The main reason was to share my comedy genius with the world (193 Twitter followers can’t be wrong: that’s almost as many people as live on Tristan da Cunha!). Oh, and the need to feed my ego, obviously a pretty major component.

I’m too sexy for my lab book

Sometimes, however, it is hard to think of a way to sell my research to you, the great unwashed. Some science sells itself by being cutting edge, earth-shattering, game-changing. This is the stuff all the scientists in the field read, the papers whose authors are known by name. It’s the science that leads to oversold newspaper headlines promising to “Cure cancer in 3 years” and staged interviews with senior Profs in the lab for the first time in years, normally wearing someone else’s lab coat, with assorted minions pipetting colourful liquids in the background.

Journeyman science

There is other work that is less headline-worthy but equally useful. It could build upon the work of others, applying novel methodology from the big papers to different areas. Alternatively, it may reproduce work performed elsewhere, increasing the validity of the original findings. Or it could be small incremental steps that add to the overall pool of knowledge, laying the foundations for future work. 99% of the scientific literature falls into this bracket.

The premiership of publications

How do we judge the quality of scientific research? Part of the science writing process is to refer to other published studies that support our findings. Sometimes we look for papers that draw completely opposite conclusions so we can make passive-aggressive comments about why their study is no good. Each mention of another study in a paper is called a citation. As a rule of thumb, papers that are cited more have made a bigger splash (good or bad). My current record is 279 for my ‘classic’ work on edible vaccines in transgenic plants, a corker if I say so myself. You might imagine that the most highly cited papers would be those sexy headline-grabbers, as these are the ones that get the most attention. Though this is using ‘sexy’ in a very loose sense: no one ever got tumescent from a western blot (deeply lame, unnecessarily crass science joke number 73).
Transgenic tobacco plants doing blue steel

Method to my madness

However, this is not the case. Of the top 10 cited papers of all time, seven are methods papers. These are very important, but somewhat dry, descriptions of how to do science. Believe me, if you weren’t finding the ‘sexy’ papers sexy, methods papers will not be doing it for you. Which finally brings me around to our latest paper: ‘Development of a custom pentaplex sandwich immunoassay using Protein-G coupled beads for the Luminex® xMAP® platform’ in the Journal of Immunological Methods. In this page-turner we describe a better way to do Luminex. Now, you will either know what Luminex is, in which case read the paper, or not, in which case may I recommend something from my back catalogue, like a fruity number on early life antibody or a saucy look at mouse genetics. If neither of these wets your whistle, click the advert at the bottom – I get 5p a click, and academics have got to live, after all!
Are you using protein G to couple your Luminex beads? If not, why not?

EU Supports science

As a final piece of drum banging: this work was funded by our good friends in the European Union, for which I say merci beaucoup. No EU = no papers about niche analytical techniques in immunology, and we can all agree that would be a shame. That, and the fact that for every pro-Brexit vote a fairy dies.

Saturday 12 March 2016

See the world in a grain of sand

Use of the microparticle Nano-SiO2 as an adjuvant to boost vaccine immune responses in neonatal mice against influenza.


There is a paradox at the heart of vaccine development: the people who need vaccines the most are the ones in whom vaccines work the least. This is particularly the case for the elderly, the immunocompromised (people whose immune systems are impaired for some reason) and the patient group my research focuses on, the very young.

Baby Steps

In our first 28 days of life (known as the neonatal period) we are particularly susceptible to infection. In part, this is because neonates have not previously been exposed to infections – the womb, for the most part, is sterile – and so have not built up a protective immune memory. However, this is not the only difference in the neonatal immune system. When the body is first infected, there is an early wave of responses called the innate system. For reasons unknown (though I am applying for funding to solve this highly important question – funders take note), the innate immune system in early life is less responsive than later in life. In particular, babies are not very good at spotting bacterial infections.
This lack of immune response enables infection of babies: bacteria can get a greater toehold prior to being recognised. But it also has an impact on vaccine responses. Vaccines, as you may or may not know, work by tricking the body into thinking it has been infected, leading you to build up a protective memory that prevents subsequent infection with the real bug. This trickery is achieved by either smashing the bugs into little bits or by using weakened (attenuated) versions. Those of you paying attention will know how we have been researching methods to weaken viruses and make better vaccines; if not, now’s the time to catch up.

Super Sand

However, given babies struggle to detect full-fat pathogens, they do even worse with the lite, vaccine versions of bacteria. This is where our current research comes in. It is possible to pep up a vaccine by adding substances called adjuvants. Adjuvants are often made of bits of bacteria, which trigger the immune system through a family of receptors called the TLRs (Nobel Prize fans – which year and who won the Nobel for discovering this? Bonus point for knowing which animal they were found in). But as you now know, babies don’t recognise bacteria, and therefore bacterially derived adjuvants are ineffective in early life. We took a different tack. The immune system doesn’t just recognise infection, it recognises damage to cells. This cellular damage is detected by something called the inflammasome. We therefore tried a number of compounds which have been described as inflammasome activators. We settled on three – Nano-SiO2 (which is really, really tiny grains of sand), CPPD (calcium pyrophosphate dihydrate, which causes an arthritis-like disease called pseudogout) and M-TriDAP (or N-acetyl-muramyl-L-Ala-γ-D-Glu-meso-diaminopimelic acid, part of the bacterial wall). We combined the compounds with flu vaccines to test whether they would improve the response. Amazingly, mixing flu with sand significantly improved the response. But then again, maybe it wasn’t that surprising: as everyone knows, sand gets everywhere, is irritating and leads to a strong reaction (‘No, you may not have a sand pit’). You’ll find all the details here.

Science, it's good for EU

This project was funded by the EU, which is something to consider very carefully – if we Brexit, no one will fund us to put sand in vaccines, and the world will be a worse place. Britain is a huge net beneficiary of EU science, not only in monetary terms but also in access to the best minds, facilities and resources.

Friday 11 March 2016

Better experimental design through statistics

‘For my birthday, I want to go to a seminar about the better use of statistics in animal experiments,’ said no 9-year-old, ever. However, add 30 years and there I was. So why, instead of going to the zoo, was I at a stats seminar?

The fault in our p’s

There is growing recognition of a reproducibility problem in all areas of biological research. This is a particular issue when it comes to studies using animals. Poor reproducibility has contributed to a failure to turn pre-clinical discovery studies into successful medicines. There are a number of causes of this problem, including unreliable research reagents, confirmation bias and the flawed use of statistics. A lot of this can be overcome with better experimental design, in particular better stats. As best I understand it (and I am not a statistician), using p<0.05 means that 1 in 20 studies will come up ‘significant’ by chance, and therefore we see results as positive when they are not. Due to some complicated statistical sleight of hand involving alpha, beta and normal distributions, it could be the case that as much as 60% of significant results are in fact untrue (though it would be better to follow some actual statisticians to check how this works).
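To make that sleight of hand concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers for study power and for the fraction of tested hypotheses that are truly real are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope false discovery rate, under illustrative assumptions.
alpha = 0.05   # type 1 error: 1 in 20 true-null studies comes up 'significant'
power = 0.20   # assumed chance an underpowered study detects a real effect
prior = 0.10   # assumed fraction of tested hypotheses that are actually true

true_positives = prior * power          # real effects correctly detected
false_positives = (1 - prior) * alpha   # null effects scraping under p < 0.05

fdr = false_positives / (true_positives + false_positives)
print(f"Fraction of 'significant' results that are false: {fdr:.0%}")  # ~69%
```

With those made-up inputs, roughly two-thirds of ‘significant’ results are false alarms, which is where figures like 60% come from; vary the assumptions and the number moves around a lot.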

Times are a changin’

But why should you care? It is (or ought to be) a given that we should all do better science. However, career pressures – the need to publish positive results in glossy journals – may lead to behaviours that are not in line with best scientific practice. This can lead to a number of tricks that are used, consciously or unconsciously, to make a more positive story: p-hacking, HARKing (Hypothesising After the Results are Known), etc. However, publishing is not the only career pressure; we all need to bring in grant funding. Both UK and US funders are using this as a tool to change research practice. The funders have updated their guidance (the MRC and NC3Rs guides are here) to ensure more rigour in experimental design in grant applications.

Je-S Kidding

In 2012, an assessment by review board members of the quality of the justification for animal use found that the reasons for animal usage and selection of species were OK, but the statistical justification was either absent or plain wrong. It used to be that grants could be awarded conditionally, subject to amendment if there were issues with the statistics. As of next year, grants can be rejected if the quality of the justification of animal usage is not good enough, without the opportunity to amend the application post-award. Let me repeat that, because we are all inclined to spend 90+% of our effort perfecting the case for support and then fill the rest of the form out in a mad dash: grants may be REJECTED, without a chance to amend post-award, if the case for animals is poor. Stating ‘We did this because we always have’ won’t cut it anymore.

Get it right

What follows are some steps that can help you improve the stats part of the justification for animal usage. Obviously this will not automatically get you funding – if it did, I’d be unlikely to share it, would I? But hopefully it will help you frame your application more clearly. NB don’t spend all your word count on the stats; you still need a case for animal usage, species, etc.

Step 1 – PICO. Put a single sentence at the beginning of the animal justification explaining in brief what you are aiming to do. Try using PICO: P (population) – to whom, I (intervention) – what will you do, C (control) – what will you compare against, O (outcome) – what will you measure. Consider your unit of measurement, e.g. 100 sections from a single liver is still n=1.

Step 2 – Describe the effect size. What is the biological effect you are looking for, ideally in a human? This should be drawn from your own experience of the disease area. One of the speakers made the very good point that a positive outcome in animal studies is seldom the actual endpoint we are interested in, i.e. human disease. Think about what a real-world, biologically significant effect would be and justify why that would be important in a patient. This should be informed by your expertise in the area. An example using blood pressure: 160 mmHg is bad, 120 mmHg is good, so the ‘effect size’ of a blockbuster drug would be a 40 mmHg drop in blood pressure. (Effect size can be modified by variability – but use a stats package or check with a statistician.)
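To see how Step 2 feeds into Step 3, the raw effect gets standardised against variability. A minimal sketch, assuming a made-up between-subject standard deviation rather than real blood pressure data:

```python
# Standardised effect size (Cohen's d) from the blood pressure example.
effect = 40.0           # target drop in blood pressure, mmHg (160 down to 120)
sd = 15.0               # assumed between-subject standard deviation (illustrative)
cohens_d = effect / sd  # ~2.7; most real studies chase far smaller effects
print(f"Cohen's d = {cohens_d:.2f}")
```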

Step 3 – Using your real-world effect size, perform a power calculation. Power calculations enable you to give an actual value for the number of animals needed for each study that will lead to real, reproducible results. ‘We use n=6 because everyone else does’ or ‘We use n=5 because that’s how many fit in a cage’ apparently is not good enough. There are four main elements to a power calculation: you need to state all four and justify each. They are the level of type 1 error (false positives, α or the p value, normally 0.05), the level of type 2 error (false negatives, β, normally 20%), the effect you are looking for (see above) and the variability (this has to be based on real-world data – yours or from the literature). There are lots of stats packages that, once you have this information, can work out the power for you; this one works.
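As a sketch of what such a package is doing, here is a group-size calculation for a simple two-group comparison using Python’s statsmodels. The effect size of 1.0 is an assumed, illustrative value; plug in your own four elements:

```python
# Animals per group for a two-sample t-test, stating all four elements.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.0,          # standardised effect (Cohen's d), assumed here
    alpha=0.05,               # type 1 error rate (false positives)
    power=0.80,               # 1 - beta, i.e. type 2 error rate of 20%
    alternative="two-sided",  # no assumed direction of effect
)
print(f"Animals needed per group: {n_per_group:.1f}")  # about 17 for d = 1.0
```

R’s pwr package or G*Power will do the same sums if Python isn’t your thing.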

Step 4 – How will you reduce unconscious bias? Use the ARRIVE guidelines as a checklist and try the experimental design assistant. Think about how you will blind, normalise, randomise, etc. There are some worked examples on the MRC site which might help.

Step 5 – GET ADVICE FROM A STATISTICIAN. Did I mention I am not a statistician? If you are reading this as your sole guidance you are in trouble! There are multiple caveats with the approach I have suggested: it is clearly not going to work for all cases; discovery science and hypothesis-free work will need other approaches; and multiple time points and repeated sampling change the weighting of the p value (think multiple coin tosses – the chance of six heads in a row is the same as the chance of H,T,T,T,H,H). Since most programmes of work are complex and multi-endpoint, it is not possible to do a detailed power calculation for each part; one approach would be to do the analysis for the major arm of the work to demonstrate you know what you are doing.
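On the multiple-testing caveat, a quick sketch of why repeated looks at the data inflate the error rate, plus the simplest (conservative) fix:

```python
# The chance of at least one false positive grows with the number of tests.
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k  # family-wise error rate, k independent tests
    print(f"{k:2d} tests: {fwer:.0%} chance of a spurious 'significant' hit")
# 20 tests -> ~64%. Bonferroni correction: test each at alpha/k instead.
```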

On the whole, my birthday trip to the stats seminar was, surprisingly, much better than a trip to the zoo (then again, I don’t really like zoos that much). It was very thought-provoking on improving experimental design and a step towards better, more reproducible science as a community.

The seminar was organised by the NC3Rs and MRC, but I am writing on my own behalf and the opinions stated here are mine.

Tuesday 1 March 2016

This career will self-destruct in 5 minutes: how not to slip on your own banana skins

I spend a considerable amount of my time giving unsolicited advice to people. In fact I have so much to say that it has spilled onto the internet, of which stream of consciousness you are now the lucky recipient. One of my recurring tropes is the need to have confidence in your own ability and how best to demonstrate your abilities to others (sell yourself without selling out). Given this, you might imagine I am sharp-suited and fake-tanned with shiny white teeth, just brimming with confidence, leaning in, owning the room, like a boss. Sadly not.

How not to conduct a phone interview when you have the whip hand

My most recent failure to practise what I preach came earlier this week. Bear with me: this isn’t just an act of catharsis; some advice will follow. I’d been approached as an ‘expert’ in my field (NB they approached me – this is important). Theoretically, the person doing the approaching is the one who needs the service, so I was in the position of power for any negotiation. And yet, even prior to the conversation I was thinking ‘why me, surely there are better qualified people out there’. There followed a disastrous phone call to discuss my availability for the role (second point to note – the phone call was to check my availability and interest, not an interview).

How not to snatch defeat from the jaws of victory

Where did I go wrong? Is there anything that I did or didn’t do that might help me, or others in the same position, in the future?

Toxic comparisons

At times, I doubt my legitimacy to be in the position I am in. At one point in the phone call I even said ‘I don’t think I am the best person for this’. I am not alone in this feeling; it even has a name: ‘imposter syndrome’. Smarter, better qualified people (see, I cannot stop myself) have written longer, better pieces about how to cope with imposter syndrome. I have one favourite stolen piece of advice that I often come back to: be careful who you compare yourself to. There is a vacuum of career management in academia. We get little feedback and often benchmark ourselves against others to try and measure success. Unless you are a Nobel laureate, such comparisons never work out favourably. I know that, based on these unrealistic comparisons, I judge myself too harshly and consequently fail to sell my good sides. The solution: don’t compare up. If you need feedback, ask your Head of Department, a mentor or a friend. And if you have to compare, remember everyone has to start somewhere; pick a superstar academic, rewind PubMed to the beginning of their career and (hopefully) you will see they are human too.

Scientific Method

I cannot possibly know all the answers, because no one can possibly know all the answers. Part of our training as scientists is to question everything, including ourselves. Not trusting anything until we have seen it at least three times is entirely appropriate for dealing with complex data sets and other scientists, but it does not translate well into dealings with the outside world, where cautious hedging can often be interpreted as doubt. The solution: overcome your training and make sweeping statements with absolute confidence, when appropriate.

Very British Problems

Being softly spoken, understated and British, whilst highly effective in romantic comedies, doesn’t necessarily get you ahead in the world of work. Imposter syndrome is not restricted to the British, but there are cultural characteristics that can make it worse. Years of cultural conditioning, being told not to show off, and the teasing ‘banter’ between friends that picks up on any self-aggrandisement can all lead to us downselling our achievements. There are also cultural euphemisms: if I say ‘yeah, I can probably do that’ I mean ‘yes, I can do that easily’, but what can be heard is ‘I cannot do this’. The solution: be more direct in what you say, especially when crossing continents.

#Overlyhonestmethods

In addition to downselling my achievements, I upsell my flaws. Obviously lying is wrong, but underselling yourself isn’t great either. In fact, you have a moral duty to ‘big yourself up’. There are many people who entirely lack self-doubt, who will apply for everything regardless of their actual ability to do it. Being chock-full of confidence, they will often get the job, especially if you don’t apply with confidence. These alpha types, lacking self-doubt, can do more damage in any role than the quiet, questioning British scientist. If we, the modest majority, do not stand up and speak up, bad decisions will be made. It is a moral duty because bad decisions cost money (investors, tax-payers, donors), lead to inappropriate clinical trials (with actual risk to people or animals) and wasted research effort (and the damage that can cause to people’s careers).

What have I learnt?

  1. It is much easier to give advice than to follow it.
  2. Phone conversations with potential employers are no place for self-effacement or cultural euphemisms.
  3. When approached as the expert, act like it.
  4. Being quietly awesome at something isn’t enough, you need to tell people how awesome you are!
  5. Asking whether you are the perfect person for the role is the wrong question. My fellow scientists, ask not ‘am I the perfect person for this role?’; ask ‘is this role perfect for me?’