AI has the power to level up our health and lifespans like never before. Leaders in the biotech sector have been working hard to ensure the responsible and ethical use of AI.

If you’re in tech, you’ve heard about the letter signed last week by thousands of technology experts calling for a pause in developing artificial intelligence language models. Cade Metz and Gregory Schmidt at the New York Times reported that “A.I. developers are ‘locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict or reliably control.’” Read the full article here.
Casey Newton, in his standalone and standout newsletter Platformer, brought the argument home, writing: “Tech coverage tends to focus on innovation and the immediate disruptions that stem from it. It’s typically less adept at thinking through how new technologies might cause society-level change.” He adds, “And yet, the potential to dramatically affect the job market, the information environment, cybersecurity and geopolitics - to name just four concerns - should give us all reason to think bigger.”
Casey is right, as are Cade and Gregory and their fellow reporters who reliably keep us up to date with AI’s breakneck developments. Indeed, AI developers (and the companies hiring them to integrate AI into their offerings) are sprinting to capture market share. As a result, AI reporters are stretched between covering those efforts and questioning what the impact may be. Information environment expert Aviv Ovadya proposes in Platformer that this responsibility does not fall on journalists alone: expert advice should be sought before a tool is widely available, both to build resilience in our society and to pre-identify areas where the technology can be misused.

This is the take on AI that most are missing: the responsibility to test, evaluate, and question the impact of AI is on all of us. If you are in a leadership position in your sector, you need to address the emergence of AI now. Do not wait. Call on your peers for responsible use and share best practices for doing so.
To understand why the time is NOW, just look at the headlines about the rapidly evolving language models:
Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change (Euronews)
Microsoft’s Bing A.I. is producing creepy conversations with users (CNBC, by Kif Leswing)
Leading to bans like this:
ChatGPT is temporarily banned in Italy amid an investigation into data collection (NPR, by Julianna Kim)
Versus the headlines about AI in medicine:
AI develops cancer treatment in 30 days, predicts survival rate (New York Post, by Brooke Steinberg)
A Mayo Clinic AI program could soon be part of kidney transplant care (Post Bulletin, by Dene K. Dryden)
Generative AI Makes Headway in Healthcare (The Wall Street Journal, by Belle Lin)
One exception I will note to the good-headline rule: biotech/health platforms using AI language models for treatments like therapy or at-home self-diagnosis. Much more testing is needed there before we can be sure such tools deliver ethical care. And I am sure there are more exceptions. But overall, I do see a difference in how our progress is characterised.
I believe this marked difference is due to our sector-wide, years-long effort to enforce the careful, ethical and restrained use of AI. That environment allows unicorns like Insilico Medicine to bring treatments to trial in record time and with record efficacy, and lets startups like Haut.AI revolutionise body analysis with anonymised, data-protected selfies. Not to mention how machine learning models are uncovering patterns across medical data and peer-reviewed research that will underpin genuinely personalised, custom, effective medical care.
And yes, like every sector, we also need to consider how AI will affect jobs in biotech. Biotech curricula must include AI instruction, and any program or medical school hoping to turn out tomorrow’s scientists should already be integrating it for their current cohorts. We will deserve the bad headlines if we use it as an excuse to lay off teams or increase profits.
This takes us back to impact. If AI language models will affect geopolitics, the information environment, cybersecurity, and jobs, what is on the way for biotech, which (as a sector) took a different approach? To echo Casey’s summary, I would say:
AI’s potential to dramatically improve the biotech job market, the treatments we take, the diagnosis times we receive, and the medical data we generate should give us all reason to act carefully, lest we ruin decades of work that could add decades to our lifespans.
If you are a biotech expert (i.e., a founder, former founder, or scientist) with hands-on AI experience, reach out. We’d like to create an advisory board that biotech companies working with AI language models can benefit from (and that shares regular learnings with the broader AI community). If you are an AI expert looking to pivot into biotech as an advisor or investor, also reach out. Our LongeVC portfolio companies are breaking new ground in this field every day.
Read Garri's full article on LinkedIn here.