Empirical studies of security and safety risks in GenAI systems used for protein design, genome editing, molecular synthesis, and other biotechnological applications.
Evaluation methods and benchmarks for stress-testing GenAI models in biological domains.
Theoretical foundations for identifying and mitigating risks associated with generative models in the life sciences.
Design and implementation of safety mechanisms and protective measures integrated directly into GenAI tools (see the sketch after this list).
Analyses of current GenAI applications in biotech that expose safety gaps, with recommendations for improving resilience.
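To make the fourth topic concrete, the following is a minimal sketch of one kind of integrated safety mechanism: an output-screening guardrail wrapped around a generative sequence model, which withholds any generated sequence matching a deny-listed motif and records an auditable reason. All names here (SequenceGuardrail, DENYLIST_MOTIFS, toy_generator) are hypothetical placeholders, not an existing API, and a real screen would rely on curated databases and far more sophisticated matching than exact substring checks.

```python
# Illustrative sketch only: a screening wrapper around a hypothetical
# GenAI sequence generator. Names and the deny-list are placeholders.
from dataclasses import dataclass
from typing import Callable, List

# Placeholder deny-list of motifs treated as concerning for this example.
DENYLIST_MOTIFS = ["ATGCGTAC", "GGGCCCAA"]


@dataclass
class ScreenedOutput:
    sequence: str
    released: bool
    reason: str


class SequenceGuardrail:
    """Screens generated sequences before they are returned to the user."""

    def __init__(self, generator: Callable[[str], List[str]], denylist: List[str]):
        self.generator = generator
        self.denylist = denylist

    def generate(self, prompt: str) -> List[ScreenedOutput]:
        results: List[ScreenedOutput] = []
        for seq in self.generator(prompt):
            hit = next((m for m in self.denylist if m in seq), None)
            if hit:
                # Withhold the sequence and record why, so refusals are auditable.
                results.append(ScreenedOutput(
                    sequence="", released=False,
                    reason=f"matched deny-listed motif {hit}"))
            else:
                results.append(ScreenedOutput(
                    sequence=seq, released=True, reason="passed screen"))
        return results


def toy_generator(prompt: str) -> List[str]:
    # Stand-in for a real GenAI model; returns fixed toy sequences.
    return ["TTTTATGCGTACTTTT", "ACGTACGTACGT"]


if __name__ == "__main__":
    guard = SequenceGuardrail(toy_generator, DENYLIST_MOTIFS)
    for out in guard.generate("design a short DNA fragment"):
        print(out)
```

The design choice illustrated is that screening sits inside the generation path rather than being left to downstream users, so every release decision and refusal is logged at the point of generation.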