Interviewed by LabWorm for NCG

Our group has been interviewed by LabWorm regarding our recent publication on Network of Cancer Genes 5.0.

I absolutely love the “artist's impression” they made of our team:

The NCG team sketched by LabWorm. Thanos Mourikis, me, and Omer An.

LabWorm is a collaborative platform for sharing tools and links related to bioinformatics. They have a very modern and interactive user interface, and they are very active in adding new links and involving people in the platform.

Over my too many years of experience in the bioinformatics field, I have seen many attempts at creating collections of bioinformatics tools. Unfortunately, many of these failed for lack of interest or lack of maintenance. LabWorm, however, seems to be doing things right for the moment: they work hard to engage people in their community, and they even publish blog interviews with researchers.

The bioinformatics community really needs an effective way to share tools and links, and I really hope that LabWorm will be successful in their attempt.

Are fitness genes more conserved across species? My 30-minute attempt

A recently published paper by Hart et al. presented a genome-wide CRISPR screen to identify fitness genes (a superset of essential genes) in five cell lines. The paper is quite impressive and shows the potential of CRISPR to generate large-scale knockouts and to characterize the importance and function of genes in different conditions.

In the discussion the authors propose that fitness genes are likely to be more conserved across species. However, they do not follow up on this hypothesis, probably for lack of space. They can't be blamed, as they already present a lot of results in the paper.

Distribution of conservation scores in the human genome (phastcons.100way.UCSC.hg19 track). Are essential genes more conserved than other genes?

This post presents a follow-up analysis of the hypothesis that fitness genes are more conserved than non-essential genes. I'll take the original data from the paper, get the conservation scores from Bioconductor data packages, and do a Wilcoxon test to compare the two distributions. The full code is available as a github repository; please feel free to contribute if you want to do some free R/Bioconductor analysis.
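In essence, the comparison boils down to something like the following sketch, assuming `fitness_genes` and `non_essential_genes` are GRanges objects with the gene coordinates (hypothetical names), and using the phastCons 100-way scores package:

```r
## A minimal sketch, not the exact code from the repository.
library(GenomicScores)
library(phastCons100way.UCSC.hg19)

phast <- phastCons100way.UCSC.hg19
cons_fitness <- gscores(phast, fitness_genes)$default        # hypothetical GRanges
cons_other   <- gscores(phast, non_essential_genes)$default  # hypothetical GRanges

## One-sided test: are fitness genes more conserved?
wilcox.test(cons_fitness, cons_other, alternative = "greater")
```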


Tutorial on working with genomics data with Bioconductor – part I

Bioconductor includes many powerful packages for working with genomics data. You can do pretty much everything, from downloading gene coordinates and sequences for any model species, to converting gene ids and symbols, to accessing ENCODE data and anything in UCSC, Ensembl, and other resources. However, these packages are not always well known, and the initial learning curve is steep, especially for R beginners.

This series of tutorials will describe how to get gene coordinates from Bioconductor, intersect them with some interesting datasets from ENCODE, and do an enrichment analysis with DOSE. It will be fun 🙂

Libraries required for this tutorial

For this tutorial we will use only human data. Most of the packages needed for working with human genomics can be installed with a single command:
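A sketch of what that command looks like, using the Bioconductor installer of the time (biocLite; on recent R versions BiocManager::install() is the equivalent):

```r
## A sketch: install the core packages used in this tutorial
source("https://bioconductor.org/biocLite.R")
biocLite(c("Homo.sapiens", "AnnotationHub", "BSgenome.Hsapiens.UCSC.hg19"))
```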

The Homo.sapiens package is a container that includes the most important packages for working with human data. In particular it contains two data packages:

  • TxDb.Hsapiens.UCSC.hg19.knownGene is a TxDb object containing the coordinates of all the genes, transcripts, and exons in the human genome
  • org.Hs.eg.db is what you need to convert gene ids – from Entrez to Ensembl, to GO, and so on.

The data in these packages are updated periodically (I think every 6 months) and are pretty stable, meaning that anybody using the same package versions should be able to reproduce the same results. An alternative to these data packages is biomaRt, but I prefer the data packages because they can be used without an internet connection.

The package AnnotationHub is used to retrieve data from multiple sources and will be described later. The BSgenome package is for retrieving the human genome sequence: we will not use it in the tutorial but I included it for completeness.

Note that I also loaded the dplyr package for this tutorial. Although dplyr is not needed for working with genomics data, I consider it one of the most useful packages in R, and this tutorial will make heavy use of it. I apologize if this tutorial is not easy to follow for people not familiar with dplyr.
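For reference, this is roughly what the setup looks like (a sketch):

```r
## A sketch of the libraries loaded for this tutorial
library(Homo.sapiens)   # pulls in TxDb.Hsapiens.UCSC.hg19.knownGene and org.Hs.eg.db
library(AnnotationHub)
library(dplyr)          # loaded last; see the note on select() below
```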

Retrieving gene and transcript coordinates

The TxDb object can be used to retrieve the coordinates of genes, transcripts, and exons in the human genome. For example, we can access all human transcripts with the transcripts() function:
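A sketch of such a call (the columns shown are just an example):

```r
## A sketch: all human transcripts, with their gene ids attached
txdb <- TxDb.Hsapiens.UCSC.hg19.knownGene
transcripts(txdb, columns = c("tx_id", "tx_name", "gene_id"))
```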

See help(transcripts) for the other functions that can be applied to a TxDb object. For example, genes() retrieves the coordinates of genes, while exons() and promoters() work similarly.

In the example above I also specified a “columns” parameter, in order to show the gene id as well. You can use this column to get the coordinates of a specific set of genes. For example, the following will retrieve the coordinates of the genes corresponding to Entrez ids 1234, 231, and 421:
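Something along these lines, using the filter argument of transcripts() (a sketch):

```r
## A sketch: restrict the output to three Entrez gene ids
transcripts(txdb, columns = c("tx_name", "gene_id"),
            filter = list(gene_id = c(1234, 231, 421)))
```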

Converting Entrez IDs to symbols and other IDs

One of the trickiest parts of bioinformatics is converting gene ids to symbols and other ids. Many errors can be made in this process, and it is therefore very important to have a consistent way to convert gene ids.

Luckily, we can use org.Hs.eg.db to easily convert many ids. This package should already have been loaded with library(Homo.sapiens). To see all the possible conversion tables (bimaps) available, we can either type library(help=org.Hs.eg.db) or simply write “org.Hs.eg” and then hit tab on the R command line.

One of my favorite bimaps is the one that converts gene symbols to Entrez ids. As you may know, for historical reasons the same gene can have more than one symbol. This usually complicates things a lot, and a safe procedure is to always convert symbols to Entrez ids before starting any analysis. The ALIAS2EG bimap is there for this type of conversion:
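A sketch of how the bimap can be used (TP53 and BRCA2 are just example symbols):

```r
## A sketch: look up the Entrez ids behind a couple of symbols
org.Hs.egALIAS2EG[["TP53"]]
toTable(org.Hs.egALIAS2EG[c("TP53", "BRCA2")])  # the same lookup, as a data frame
```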

When you convert symbols to ids, it is important to remember that not only can the same gene have more than one symbol, but the same symbol can also match multiple Entrez ids. For example, here is the code to get the symbols that match more than one Entrez id in the human species:
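A sketch of that check, converting the bimap to a data frame first:

```r
## A sketch: symbols that map to more than one Entrez id
toTable(org.Hs.egALIAS2EG) %>%
    count(alias_symbol) %>%
    filter(n > 1) %>%
    arrange(desc(n))
```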

Note: the “%>%” operator and the count, filter, and arrange functions come from the dplyr package.

The only safe thing to do in these cases is to identify the duplicated symbols and either discard them or manually curate them. Let's imagine that a fellow researcher asked us to retrieve information for the genes ACT, DOLPP1 and MGAT3. We can use the bimap to identify the genes that map to multiple Entrez ids, and then go back to our colleague and ask which are the correct ids.
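A sketch of that lookup:

```r
## A sketch: do any of the requested symbols map to multiple Entrez ids?
requested <- c("ACT", "DOLPP1", "MGAT3")
toTable(org.Hs.egALIAS2EG) %>%
    filter(alias_symbol %in% requested) %>%
    count(alias_symbol)
```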

In these examples I converted the bimap to a dataframe and then did an intersection. However, the “Bioconductor” way to use these bimaps is through the select function:
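A sketch of the equivalent select() call (note the explicit AnnotationDbi:: prefix, for the reason explained below):

```r
## A sketch: the same lookup through the select() interface
AnnotationDbi::select(org.Hs.eg.db,
                      keys    = c("ACT", "DOLPP1", "MGAT3"),
                      keytype = "ALIAS",
                      columns = c("ENTREZID", "SYMBOL"))
```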

One big problem with these Bioconductor packages is that they clash with many dplyr functions. For example, the select function gets overwritten if you load dplyr after Homo.sapiens, and the only option to avoid headaches is to explicitly refer to the function as AnnotationDbi::select. These namespace conflicts can cause a lot of confusion in R, because they lead to weird error messages that are completely unrelated to the real problem.

In any case, the advantage of the select function is that it allows you to retrieve multiple id types at the same time. For example, here I retrieved both Entrez and Ensembl ids; if you type columns(org.Hs.eg.db) you will see many other possible output columns.
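A sketch of such a call:

```r
## A sketch: retrieve Entrez and Ensembl ids in a single call
AnnotationDbi::select(org.Hs.eg.db,
                      keys    = c("DOLPP1", "MGAT3"),
                      keytype = "ALIAS",
                      columns = c("ENTREZID", "ENSEMBL"))
columns(org.Hs.eg.db)  # all the available output columns
```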

Next parts of the tutorial

I was originally planning to write one big tutorial in the same post, but now I see that it would be much more readable if I split it into multiple posts.

Please let me know if you have any comments regarding this first tutorial, and I will try to improve it and take the feedback into account for the next parts.

A tutorial on organizing and plotting data with R

Every year I teach in the Programming for Evolutionary Biology course held in Leipzig. It's an intensive three-week course, in which we take 25 evolutionary biologists with no prior knowledge of programming, lock them in a room together with some very good teaching assistants, and keep teaching them how to program until they learn or manage to escape.

Jokes aside, the course is a very nice experience, and people have a lot of fun, as the three weeks are full of discussions about evolutionary biology and bioinformatics. The nice thing is that this year two brilliant former students (Karl and Michiel) organized a whole reunion conference, called the PEB conference, which is taking place this week at CIBIO near Porto. This reunion is also an opportunity to follow up with the students, and I have been in charge of organizing a couple of workshops, one about installing software on Linux and one about advanced R programming.

The tutorial for advanced R programming is available on github and below. I think it may be of interest to anybody with some knowledge of R who wishes to learn some new tricks. In particular, the tutorial is about good ways to organize a dataframe, and I tried to cover a few beginner mistakes about data analysis that I have seen on Biostar. It describes the differences between a dataframe in wide and long format, how to convert one to the other, and the advantages of doing that. It also teaches how to calculate group-based summaries with dplyr, and how to plot them with ggplot2.

Difference between a dataset in a wide and a long format
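As a taste of what the tutorial covers, here is a minimal sketch (the data frame and its column names are hypothetical):

```r
## A minimal sketch: wide -> long, group summaries, and a ggplot2 plot
library(tidyr)
library(dplyr)
library(ggplot2)

wide <- data.frame(species = c("human", "chimp", "gorilla"),
                   gene_a  = c(5.2, 4.8, 5.0),
                   gene_b  = c(2.1, 2.5, 1.9))

## Wide -> long: one row per (species, gene) observation
long <- wide %>% gather(gene, expression, -species)

## Group-based summary with dplyr
long %>% group_by(gene) %>% summarise(mean_expression = mean(expression))

## Plot the long-format data with ggplot2
ggplot(long, aes(x = gene, y = expression, fill = species)) +
    geom_bar(stat = "identity", position = "dodge")
```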


NCG enrichment implemented in DOSE

I would like to give a big thanks to Guangchuang Yu, the author of many cool R libraries like GOSemSim and ggtree, for implementing a Network of Cancer Genes enrichment function in the DOSE R library.

The new function is called enrichNCG, and it can be found in the github version of DOSE. You can use it to analyze a list of genes and determine whether they are enriched in genes known to be mutated in a given cancer type. For example, a random list composed of genes with an Entrez id between 4000 and 9000 is enriched in genes mutated in sarcoma and leukemia:
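A sketch of the call (the list below is a simple stand-in for the random list described above):

```r
## A sketch: enrichment of an Entrez gene list against NCG cancer types
library(DOSE)
genes <- as.character(4000:9000)  # stand-in for the random list above
ncg   <- enrichNCG(genes)
head(summary(ncg))
```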

If you have multiple sets of genes, you can also use the clusterProfiler library to compare them at the same time. Read this previous post for more examples of this functionality.

If you also have gene scores (e.g. a value for the expression or conservation of each gene), you can do a Gene Set Enrichment Analysis, which will give more importance to genes with higher scores:
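A sketch with the gseNCG function from the same package, assuming geneScores is a named numeric vector (names are Entrez ids) sorted in decreasing order:

```r
## A sketch: GSEA against NCG categories
## geneScores is a hypothetical named vector, sorted in decreasing order
gsea <- gseNCG(geneScores, nPerm = 1000)
head(summary(gsea))
```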

You can also produce many nice plots. For example, this is a cnetplot, in which each gene is connected to the terms related to it:
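Something like the following (a sketch):

```r
## A sketch: connect each enriched term to its genes
cnetplot(ncg, categorySize = "pvalue")
```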

It is worth mentioning that the DOSE package also allows you to calculate enrichment against the Disease Ontology database, which associates genes with disease terms. In my experience, Disease Ontology is more useful for bioinformaticians than OMIM, because it provides a clear association between genes and disease terms. If you use the raw OMIM data instead, you have to text-mine the descriptions, which can lead to a lot of noisy data.

Have a good enrichment with DOSE and NCG 😉

Updates on docker and bioinformatics

My previous post on docker and bioinformatics received some good attention on Twitter. It’s nice because it means that this technology is getting the right attention in the bioinformatics community.

Here are a few resources and articles I've found thanks to the conversations on Twitter.

  • Performance of Docker on an HPC cluster – a nice article showing that running an NGS pipeline in a Docker container costs about 4% in performance. It’s up to you to decide whether this is a big or a small price to pay.
  • biodocker is a project by Hexabio aiming at providing many containers with bioinformatics applications. For example, you can get a virtual machine with Biopython or samtools installed in a few minutes. Update: this may have been merged with bioboxes (see discussion)
  • oswitch is a nice implementation of docker from Queen Mary University of London, which allows you to quickly switch between docker images. I like the examples in which they run a command from a virtual image and then return directly to another environment.
  • ngeasy, a Next Generation Sequencing pipeline implemented on Docker, by a group from King’s College London (I work in the same institute but I didn’t know them!).
  • a nice discussion on Biostar on how a reproducibility problem could be solved with Docker.
  • a Docker symposium planned for the end of 2015 here at King’s.
  • Biopython containers by Tiago Antao, including some IPython tutorials

Docker is another innovation for data analysis introduced in 2014. I am surprised by how many good things were released last year, including docker and the whole dplyr/tidyr bundle. Let’s see what 2015 will bring!

The Network of Cancer Genes database

In the last year I have been part of the team maintaining and updating the Network of Cancer Genes database, also known as NCG.

The logo of the Network of Cancer Genes database

The main focus of NCG is to provide a curated list of genes associated with cancer, obtained from a manual review of the literature and classified by cancer subtypes. Moreover, NCG annotates several system-level properties of these genes, from their protein interactions to their evolutionary age, and from the presence of paralogs in the human genome to their function.

NCG is a small database and is not supported by any big consortium, but we do our best to fill our niche :-). The following list describes what you can get from NCG and how it can be useful to you.


Reproducible bioinformatics pipelines with docker

I have recently come across a nice article explaining what Docker is and how it can be useful for bioinformatics. I'll refer you to the article for the details, but basically Docker is an easy way to define a virtual machine, which makes it very straightforward for other people to reproduce the results of an analysis, with little effort on our side.

For example, let's imagine that we are just about to submit a paper, and that our main results are based on the Tajima's D index computed from the 1000 Genomes data. The journal may ask us to show how to reproduce the analysis: which files did we use as input? Which tool did we use to calculate Tajima's D?

In this case, a Dockerfile may look like the following:
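Here is a sketch of what such a Dockerfile may contain; the repository URL, the chromosome 22 region, and the rule name are all hypothetical:

```dockerfile
# A sketch; URLs, file names, and rule names are hypothetical.

# Part 1: base system and the software needed by the pipeline
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y tabix vcftools git python3-pip
RUN pip3 install snakemake

# Part 2: clone the pipeline and download a slice of chromosome 22
# from the 1000 Genomes FTP
RUN git clone https://github.com/user/tajima-pipeline.git /pipeline
WORKDIR /pipeline
RUN tabix -h ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/ALL.chr22.phase3_shapeit2_mvncall_integrated_v5a.20130502.genotypes.vcf.gz 22:30000000-31000000 > chr22_region.vcf

# Part 3: run the snakemake rule that computes Tajima's D
RUN snakemake tajima_d
```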

The first part of this Dockerfile sets up an Ubuntu virtual machine and installs all the software needed to execute the pipeline: tabix, vcftools, and snakemake. The second part clones the latest version of the pipeline into the virtual machine, and then uses tabix to download a portion of chromosome 22 from the 1000 Genomes FTP. The third part runs the pipeline, by executing a snakemake rule.

You can build this Docker image by running the following:
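Something like this sketch, where “tajima-example” is a hypothetical image name:

```bash
# Build the image from the Dockerfile in the current directory
docker build -t tajima-example .
```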

This will take quite a while to run, and will build a docker virtual image in your system. Afterwards, you can run the following:
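A sketch, using the hypothetical image name from above:

```bash
# Open an interactive shell inside the image built above
docker run -t -i tajima-example /bin/bash
```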

This command will open an interactive shell in the virtual machine. From there you will be able to inspect the output of the pipeline and, if this pipeline were more complex than a mock example, run other rules and commands.

This system makes it very easy to provide an environment in which our results can be reproduced. It is also very useful if we work from more than one workstation – e.g. if we need to have the same configuration at home and in the lab.

Just a few more links on docker and bioinformatics: