Deciding which project to progress when the technologies and applications are diverse is a challenge that faces many innovation managers. In this guest blog Mark Richardson, CEO of the National Biofilms Innovation Centre, discusses how he resolved this problem and introduced an objective scoring system for projects.
As we are all learning in the COVID-19 outbreak, boldly claiming to be “following the science” or “data driven” doesn’t always lead to clear-cut answers. As information becomes more complicated and specialised, effective decision-making grows increasingly difficult for managers in R&D. They have to weigh diverse and often conflicting information in order to form the best possible view of the way forward.
This was the situation I faced when I joined the National Biofilms Innovation Centre (NBIC). We have the honour of running funding calls on behalf of UK Research and Innovation (UKRI) for collaborative projects between UK companies and our partner universities (which now number 52). These translational projects build on several decades of UK science investment aimed at controlling or exploiting biofilms. The project applications are very diverse and specialised, so how could we select the most promising?
Biofilms: a challenge and an opportunity
Biofilms are communities of microorganisms that attach to a surface within an extracellular matrix, triggering a change in metabolism and behaviour compared with simple suspensions of microorganisms.
These biofilms present both problems and opportunities across a range of industrial sectors.
A market study that we in NBIC carried out in 2017 suggested that biofilms impact global markets worth $5tn.
This is because biofilms can have a range of effects:
– Cause corrosion on metal surfaces (eg in oil pipelines)
– Cause biofouling on ships
– Cause antibiotic resistance and infection in humans and animals
– Be linked to food poisoning, as they make cleaning of food production surfaces difficult
– Cause dental caries … and much, much more!
Yet they can also be harnessed successfully to create energy, consume waste, change the microbial flora of the gut, treat sewage or water and be a source of high value chemicals.
With such a diversity of sectors and projects, how do we compare project applications fairly and transparently for impact and benefit?
To design an effective process for prioritising a diverse portfolio of complex science, technology and industry projects, I turned to my industrial background, where selecting winning projects means weighing a number of different criteria:
– Is there an unmet need?
– Is there an idea or project capable of satisfying this need built on sound science?
– Is there a strong team (are the academic and industrial partners truly engaged?), a sound plan, and an understanding of the risks and their mitigation?
– What’s the commercial opportunity?
– Have the group resolved the intellectual property (IP) situation and ownership?
– What would they do next? Is the work likely to be progressed further if the project has a successful outcome?
Essentially, we asked what the business case is for progressing the science in this project.
However, this range of criteria can produce a mass of complex information which is hard for any one person to compare and make sense of objectively. There is so much knowledge and specialism involved that no one person can cover it all, and this is getting harder as specialisms subdivide: it is impossible for one expert to keep abreast of the state of the art across them all. So who decides which are the best applications, and how?
I decided to reach out to the Biofilm community in the UK to help.
This is a group of domain experts and practitioners who think often about the problems and opportunities biofilms offer; they engage with us and come to our events. I determined that projects would be assessed by a group of peers: applied scientists and experts working in industry (eg in food, marine, health, household products, oil and gas, and waste and wastewater) and academics across our 52 partner universities who understand the state of the science. These people have strived, sometimes for many decades, to translate science and technology, and so they have a realistic idea of what “good” might look like.
We were fortunate to assemble around 80 such volunteers willing to assess projects against a defined set of criteria based on a straightforward but multi-criteria application form which had to be completed by the joint applicants (academia and industry).
Ten-point weighted criteria for project selection
The assessors used a 10-point scale for marking against these weighted criteria, as well as giving narrative feedback, and of course each project had at least one industrial and one academic assessor. For example, one question asks the applicants: “What is the status of the intellectual property (IP) within the project (eg does it exist and who owns it)? Do you currently have freedom to operate (FTO)? How will ownership of IP generated during the project be handled?”
The assessor then judges whether the IP status has been described adequately in terms of background IP, ownership and freedom to operate, and whether the applicants have a clear understanding of how they will deal with any arising IP. A score of ten would be awarded where a strong IP arrangement matching UKRI guidelines already exists between the parties, or an agreement exists in principle. A score of one might be given for the answer “we intend to discuss this once the project is started!”
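As a minimal sketch of how weighted, multi-assessor scoring of this kind can be aggregated, the snippet below averages each assessor’s weighted criterion scores across assessors. The criterion names and weights here are illustrative assumptions, not NBIC’s actual scheme:

```python
# Illustrative sketch of weighted multi-criteria scoring.
# Criterion names and weights are hypothetical, not NBIC's actual values.

CRITERIA_WEIGHTS = {
    "unmet_need": 0.25,
    "science_quality": 0.25,
    "team_and_plan": 0.20,
    "commercial_opportunity": 0.20,
    "ip_position": 0.10,
}

def weighted_score(scores: dict) -> float:
    """Combine one assessor's 1-10 criterion scores into a weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

def project_score(assessor_scores: list) -> float:
    """Average the weighted totals from all assessors for one project."""
    return sum(weighted_score(s) for s in assessor_scores) / len(assessor_scores)

# Example: one industrial and one academic assessor (scores invented)
industrial = {"unmet_need": 8, "science_quality": 7, "team_and_plan": 6,
              "commercial_opportunity": 9, "ip_position": 5}
academic = {"unmet_need": 7, "science_quality": 9, "team_and_plan": 7,
            "commercial_opportunity": 6, "ip_position": 5}

print(round(project_score([industrial, academic]), 2))  # → 7.18
```

A simple average across assessors is only one design choice; in practice, as described above, divergent scores are resolved by a panel discussion rather than by arithmetic alone.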
Reviewers may, and sometimes do, differ in their views on the same questions in the same application. Like all of us, they make judgments through the lens of their own experience. How do we resolve these differences?
I decided that, once we had received the assessor scores and feedback, we would assemble a panel of 11 experienced experts: five industrial R&D professionals and five academics, along with a completely independent chairperson. These members represent a range of sectors, specialities and experiences, but all have worked at the difficult interface between academia and industry for many years. Each takes ownership of a group of applications, reviews the assessors’ scores, forms their own assessment, and then presents a reasoned view on each project to the whole panel, which then awards a third score.
The group discuss and review all projects (approximately 50 per competition) and develop a ranked list based on overall quality. To achieve this they must resolve, as a team, the complex subtleties of making comparative judgments across business and science. It’s hard work for them, but consensus emerges.
This whole process takes up to five months from opening to award. The call is open for nine weeks to receive applications, followed by three weeks for assessors to review projects and then two weeks for the panel members to assess the overall scores. We then hold a panel meeting to reach final decisions, with awards communicated very soon afterwards. Each project, successful or unsuccessful, receives detailed feedback. Over our three calls we have received 144 applications, of which 65 have been awarded.
So what is the key for effective decision making in R&D?
Of course, at the heart of it are objective data, relentless use of good assessment tools and an independent, objective mind. But at the very end of the process, the application of experience, judgement and debate can still be vital to achieving a balanced outcome: one that is seen neither as purely “process driven” (and hence at the mercy of an imperfect process) nor as purely “gut feel” and ad hoc, “seat of the pants” behaviour.
Based on this transparent, fair process, I am convinced we have efficiently and effectively created a portfolio of 65 diverse projects: the projects which those most knowledgeable, experienced and objective across both industry and academia judge will best drive the translation of knowledge and technology from the university sector to industry for societal benefit.
To summarise: decision-making in R&D is not a pure science, but there are best-practice processes to learn from which can balance judgment, experience and data, and very many well-tried tools that can be modified to fit.