An evaluation of ChatGPT and Bard (Gemini) in the context of biological knowledge retrieval

Posted on July 12, 2024 by Ron Caspi, Biocurator, SRI International

Ron Caspi takes us behind the scenes of their latest publication 'An evaluation of ChatGPT and Bard (Gemini) in the context of biological knowledge retrieval', published in Access Microbiology.


My name is Ron Caspi. I received my PhD in marine microbiology from the Scripps Institution of Oceanography back in 1996. After a postdoc and a few years in biotech, I joined Peter Karp's team at SRI International. This small group, consisting of computer scientists and biologists, develops the BioCyc web portal, which combines >20,000 pathway/genome databases (mostly for bacteria) with a set of powerful bioinformatic tools designed to be used by biologists (no programming skills required). My role in the group is that of a biocurator. I read a lot of papers and summarise metabolic information into the MetaCyc database, the largest curated database of experimentally elucidated metabolic pathways and enzymes, gathered from all domains of life. I am also a member of the IUBMB and IUPAC nomenclature committees, as well as the Enzyme Commission, which classifies enzymes.

Over the years I have curated thousands of pathways and enzymes in the MetaCyc database, as well as in many organism-specific databases. Curation is a time-consuming process, so when ChatGPT started gathering attention, I was excited about the possibility of using it to enhance my curation. Alas, when I actually tried it, I found that I was often receiving incorrect information. When I mentioned this to my group, my director suggested conducting a more controlled experiment to evaluate the performance of ChatGPT for my specific purposes. So, as I continued my curation work and new questions emerged, I occasionally consulted ChatGPT, evaluated the response and kept records of my interactions. When Google introduced its own chatbot, Bard (now Gemini), I asked it the same questions that I had asked ChatGPT. At some point I felt that I had enough information to share my observations with others.

In the paper, 'An evaluation of ChatGPT and Bard (Gemini) in the context of biological knowledge retrieval', I presented my results. Each chatbot was presented with the same set of eight questions:

  1. Which enzymes produce 4-methylumbelliferyl glucoside?
  2. What type of quinone is found in Staphylococcus?
  3. What is the function of Mrt4 in yeast?
  4. What are the three NifS family proteins found in E. coli?
  5. Does E. coli contain any HD-GYP domain-containing proteins?
  6. Who named the curli protein of E. coli?
  7. What is the RbcX protein?
  8. Can you give an example of a cyanobacterial enzyme that contains ubiquinone in its name?

Note that the last question is a bit tricky, since cyanobacteria do not produce ubiquinone. However, annotation pipelines occasionally mislabel cyanobacterial proteins with ubiquinone, and I was trying to identify such cases.

The performance of the chatbots was evaluated using a simple system: a fully correct answer received 3 points; a mostly correct answer received 2 points; a mostly incorrect answer received 1 point; and a completely incorrect answer got no points at all.
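For readers who like to see the arithmetic spelled out, here is a minimal sketch of that rubric in Python. The rating labels and the function name are mine for illustration, not part of the study's actual tooling: each of the eight questions is rated on the four-level scale above and the points are summed, giving a maximum possible score of 24.

```python
# Minimal sketch of the scoring rubric described above (illustrative only).
# Each answer is rated on a four-level scale and the points are summed.

POINTS = {
    "fully correct": 3,
    "mostly correct": 2,
    "mostly incorrect": 1,
    "completely incorrect": 0,
}

def total_score(ratings):
    """Sum the points for a list of per-question ratings."""
    return sum(POINTS[r] for r in ratings)

# Eight fully correct answers would give the maximum score of 24.
print(total_score(["fully correct"] * 8))  # -> 24
```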

The result: out of a maximum score of 24, ChatGPT scored 5 and Bard scored 13. Not great. The problems included missing information that is readily available on Google or PubMed, providing incorrect information, and sometimes producing a mixture of correct and incorrect information, which makes it difficult for the user to know what can be trusted and what cannot. The tools were also inconsistent, providing different answers to the same question when I contested the validity of a given answer. Since the answers provided by these tools cannot currently be trusted, the time a user would need to spend verifying the information would not be significantly less than the time it would take to research the topic by other means.

Perhaps the results are not that surprising, since even ChatGPT agrees. When asked about it, it said, "For specific and up-to-date scientific information, established scientific journals, databases, and subject-matter experts remain the preferred avenues for trustworthy data"…