It's always exciting to hear of a new sequencing technology approaching fruition, and Oxford Nanopore's emergence from "stealth mode" at the AGBT meeting in Florida last week was especially so (good coverage here). The technology is appealing because it measures a single DNA molecule, which simplifies sample preparation, and uses integrated electrical sensors, which substantially reduces instrument size and complexity compared with optical sensors. I would argue that these two attributes are the hallmarks of true 'third generation' (3G) sequencing.
Assuming that the technology lives up to the hype, how will bioinformatics be driven by 3G sequencing? We have already had to adapt to various high-throughput sequencing platforms spewing data with different read lengths and error models during the transition from Sanger to 2G (next-gen). Alongside the advances in bioinformatics algorithms and workflows, there has been a cascade of capability, with genomics core facilities now able to provide services that were previously the exclusive domain of genome institutes. Going by the "USB drive" sized prototype sequencers exhibited last week, with an expected price tag of $900, one can only assume that the cascade will continue with 3G, from core facility to lab, soon reaching the researcher's desktop.
What will be the bioinformatics needs of "desktop sequencing"? A new breed of super-efficient, GPU-exploiting desktop sequence analysis software? Or (and those familiar with this blog will be unsurprised by the suggestion) a Microsoft-esque cloud service for sequence management and analysis? The latter has a certain resonance when coupled with Oxford Nanopore CTO Clive Brown's description of their technology as "sequencing on demand" (New York Times). What better complement to "sequencing on demand" than "computing on demand"?