Acknowledgements: Based on work done with the Merck Innovation Centre, Darmstadt.
From fiction like Mary Shelley's Frankenstein, to real breakthroughs like the cloning of Dolly the Sheep, the prospect of being able to engineer biology has long been a source of fascination and endeavour for artists and scientists alike. The processes, techniques, and technologies involved are incredibly complex, yet investment in synthetic biology projects has been significant and looks set to continue growing. Revenues from SynBio products and services are projected to reach USD 3.9 billion in 2016, with 93% accounted for by applications in healthcare, specialty chemicals, and life science R&D.
So, what is Synthetic Biology exactly? Essentially, it is the application of engineering philosophy to molecular biology in order to make biology easier to engineer. This is the foundation of Biotechnology 2.0, whereby scientists are systematically empowered to create new biomolecules, pathways, cells, tissues, and organisms that do not exist in nature, or to redesign existing systems.
The turn of the millennium is a convenient origin point for the field. The preceding decade had seen dramatic improvements in computational tools, allowing a scale-up of molecular biology and the rise of systems biology. Systems biology's attempts to map cellular networks fostered the view that these networks could be organised into discernible functional modules.
As this feature is shared by many engineered systems, the implication was that, in complement to the top-down approach of traditional genetic manipulation, a bottom-up approach could draw on the expanding list of molecular 'parts' to forward-engineer regulatory networks, and that this could form the basis of a formalised biological engineering discipline. Such a discipline would apply engineering precepts like rational design and iterative optimisation, and in particular the abstraction of complexity to facilitate that design. Part and protocol standardisation are a key part of this, enabling predictable function and interoperability, as are part modularity and orthogonality, which allow parts, devices, and entire sub-systems to be plugged in and out.
Contrast this with the traditional top-down approach to genetic manipulation, whose discovery-driven legacy requires that we limit ourselves to one modification at a time to minimise system-wide repercussions. Implementing what we can of these engineering precepts is thought necessary to move beyond that to true rational design. In this context, parts, devices, and sub-systems refer to genes, to gene-circuit components like logic gates, and to operons, respectively. This application of engineering philosophy is the clearest axiom of SynBio.
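To make the part/device abstraction concrete, here is a minimal sketch in Python of how standardised parts might be composed into a device in software. The class names, part identifiers, and sequences are illustrative assumptions only; they are not a real parts registry or any existing SynBio toolkit.

```python
# Illustrative only: a toy abstraction of standardised parts composed into a device.
# Part names and sequences are placeholders, not real registry entries.
from dataclasses import dataclass
from typing import List


@dataclass
class Part:
    """A standardised DNA part: a named sequence with a defined role."""
    name: str
    role: str          # e.g. "promoter", "rbs", "cds", "terminator"
    sequence: str


@dataclass
class Device:
    """An ordered composition of parts intended to perform one function."""
    name: str
    parts: List[Part]

    def assemble(self) -> str:
        # Naive concatenation stands in for a real assembly standard.
        return "".join(p.sequence for p in self.parts)


# A hypothetical inducible reporter built from interchangeable parts.
reporter = Device(
    name="inducible_reporter",
    parts=[
        Part("promoter_A",   "promoter",   "TTGACA"),
        Part("rbs_strong",   "rbs",        "AGGAGG"),
        Part("reporter_cds", "cds",        "ATGGTG"),
        Part("terminator_1", "terminator", "TTATTT"),
    ],
)

print(len(reporter.assemble()))
```

Because each part exposes only a name, a role, and a sequence, swapping the promoter or the reporter for another standardised part leaves the rest of the device untouched; that interchangeability is the practical payoff of modularity and orthogonality.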
The challenges to progress in SynBio are many and varied. The sheer complexity of biological systems makes the engineering workflow relatively slow and expensive, and the same applies to troubleshooting during development. In addition, biological complexity cannot easily be reduced to purely orthogonal BioBricks that can be plugged in and out, although developments in informatics may mitigate this.
In fact, recent years have seen a centralisation of synthetic biology research into dedicated foundries and other specialised facilities that incorporate automated workflows for gene assembly, transformation, cloning, selection, and testing. These workflows take advantage of modular hardware (e.g. HighRes Bio), parallelised and miniaturised reaction volumes enabled by acoustic liquid handling (e.g. Labcyte), easy DNA editing via CRISPR-Cas9, cheaper synthetic DNA from vendors leveraging massive synthesis and assembly parallelisation, and of course ever cheaper and more accessible sequencing. To use these facilities at the scale they enable, the workflows need to be interfaced with informatics languages that can describe molecular biology workflows and gene-circuit designs, so that machine learning and design of experiments can be applied to protein and organism engineering.
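As a rough illustration of what such an informatics description might look like, the sketch below declares a build-and-test workflow as data that an execution layer could dispatch to automated hardware. The step names, parameters, and execute() stub are hypothetical assumptions for this article, not the API of any particular foundry, vendor, or cloud-lab platform.

```python
# A hypothetical, declarative description of a design-build-test run.
# Operation names and parameters are illustrative, not a real platform's schema.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Step:
    operation: str                      # e.g. "assemble", "transform", "screen"
    params: Dict[str, str] = field(default_factory=dict)


@dataclass
class Workflow:
    name: str
    steps: List[Step]

    def execute(self) -> None:
        # In a real foundry this would be dispatched to scheduling software
        # driving liquid handlers, incubators, and sequencers.
        for step in self.steps:
            print(f"[{self.name}] {step.operation}: {step.params}")


build_test = Workflow(
    name="variant_screen_v1",
    steps=[
        Step("assemble",  {"method": "golden_gate", "constructs": "96"}),
        Step("transform", {"host": "E. coli DH5a"}),
        Step("select",    {"marker": "kanamycin"}),
        Step("screen",    {"assay": "fluorescence", "replicates": "3"}),
        Step("sequence",  {"coverage": "30x"}),
    ],
)

build_test.execute()
```

The point of expressing the workflow as data rather than as hands at a bench is that it becomes reproducible, schedulable, and amenable to machine learning over the accumulated results.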
Part of the reason for centralisation is the current difficulty of justifying the set-up time, CAPEX, and automation expertise required to implement individual projects this way in individual labs. But this activation-energy cost keeps labs stuck in the manual trial-and-error style of research, where pieces of automation are force-fitted around manual workflows, which is severely limiting. It's just one gel after another, changing this cation concentration or adding another reaction factor to see what happens, with 70% of the time spent manually preparing reagents, pipetting, running the gel, and so on.
Computer-aided biology supported by lab automation should liberate scientists from manual work so they can focus on experimental design, with on-demand execution of design-build-test cycles and rapid experimental scale-up. Automated protocols also limit manual errors and contamination, making data sets far more reliable. Biology problems are so high-dimensional that breaking them into iterative single-variable experiments digestible by a manual researcher is self-undermining, given the engineering and informatics capabilities now available.
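To illustrate the contrast with single-variable experimentation, the minimal sketch below enumerates a full-factorial design over a handful of hypothetical reaction factors; the factor names and levels are assumptions for illustration only. An automated platform can run every combination in one batch, where a manual workflow would grind through them one variable at a time.

```python
# A minimal full-factorial design over hypothetical reaction factors.
# Factor names and levels are illustrative assumptions; a real study would
# often use fractional or model-guided designs to keep the run count
# manageable as dimensionality grows.
from itertools import product

factors = {
    "mg_mM":        [5, 10, 20],          # cation concentration
    "temp_C":       [30, 37],             # incubation temperature
    "inducer_uM":   [0, 50, 250],         # inducer level
    "enzyme_ratio": [0.5, 1.0, 2.0],      # additional reaction factor
}

# Every combination of levels: 3 * 2 * 3 * 3 = 54 conditions, trivial for an
# automated liquid handler but weeks of work done one variable at a time.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

print(f"{len(design)} conditions to run in one automated batch")
print(design[0])
```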
To make an analogy, no human mind laid out the 2.6 billion transistors in an i7 chip. Chip designers no longer work by hand; they specify desired behaviour in a textual hardware-description language, from which the detailed physical design is derived and the manufacturing process automated. No doubt this is one reason why the founders of the world's largest technology companies, including PayPal, Google, Microsoft, Yahoo, and Twitter, are investing in SynBio foundries, cloud biotech labs, organism engineering, gene-editing therapeutics, and other areas.