Overview

1 The Drug Discovery Process

This chapter frames drug discovery as a long, expensive, high-risk endeavor and positions AI and machine learning as key enablers for making it faster, cheaper, and more reliable. The central challenge is searching staggeringly large chemical and biological spaces—on the order of 10^63 drug‑like molecules and ~10^5 human protein targets—while optimizing multiple properties such as efficacy and safety. The chapter highlights how modern computational approaches contribute across the pipeline: virtual screening and molecular property prediction to triage vast libraries, deep learning advances like AlphaFold to determine protein structures, generative chemistry to propose novel chemical entities that meet desired criteria, and reaction prediction/retrosynthesis to plan practical synthetic routes. Together these methods broaden the candidate funnel, prioritize better molecules earlier, and curb costly late-stage failures.

To ground these applications, the chapter builds core ML and cheminformatics foundations. It clarifies AI/ML/deep learning distinctions; emphasizes training vs. inference, generalization, and overfitting; and introduces supervised tasks (classification, regression) and unsupervised tasks (clustering, dimensionality reduction, representation learning, generative modeling). It explains how molecules are represented for computation, from SMILES (including canonical and isomeric variants that encode stereochemistry) to fingerprint descriptors, noting why stereochemistry can critically affect biological outcomes. Using RDKit, the chapter demonstrates practical steps—creating molecular objects, computing ECFP fingerprints, exploring chemical space with PCA, and fitting simple classifiers (e.g., logistic regression) to relate structure to pharmacological classes—illustrating how data, features, and models link structure to function.
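The RDKit workflow described above can be sketched end to end. This is a minimal, illustrative sketch that assumes RDKit and scikit-learn are installed; the four molecules and the binary "analgesic" labels are toy examples, not the chapter's USAN data set.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Toy molecules (illustrative, not the chapter's data set)
smiles = [
    "CC(=O)Oc1ccccc1C(=O)O",       # aspirin
    "CC(C)Cc1ccc(cc1)C(C)C(=O)O",  # ibuprofen
    "CC(=O)Nc1ccc(O)cc1",          # paracetamol
    "Cn1cnc2c1c(=O)n(C)c(=O)n2C",  # caffeine
]
mols = [Chem.MolFromSmiles(s) for s in smiles]

# 1024-bit Morgan fingerprints (radius 2, comparable to ECFP4)
X = np.array([AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=1024)
              for m in mols])

# Explore chemical space in a reduced dimensionality
coords = PCA(n_components=2).fit_transform(X)

# Relate structure to a (toy) pharmacological class: 1 = analgesic
y = np.array([1, 1, 1, 0])
clf = LogisticRegression().fit(X, y)
```

The same pattern carries over to the chapter's USAN-stem classes: swap in the real SMILES strings and labels, and the fingerprint, PCA, and classifier steps are unchanged.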

The chapter then outlines the modern pipeline: target identification and validation; hit discovery via computational and high-throughput screening; lead identification with confirmatory assays and early ADMET profiling; and lead optimization to improve potency, selectivity, bioavailability, safety, and overall PK/PD properties. Candidates that pass preclinical evaluation advance to clinical trials: Phase I for first-in-human safety and dosing, Phase II for preliminary efficacy and continued safety, and Phase III for definitive, large-scale assessment against standards of care. It also notes expedited pathways for pressing medical needs (first‑in‑class, orphan, breakthrough, accelerated). Across these stages, AI and deep learning support target assessment, screening, de novo design of candidates, synthesis planning, and protein structure prediction—aiming to expand search coverage, reduce attrition, and increase the novelty and quality of molecules that reach the clinic.

Drug discovery can be thought of as a difficult search problem that exists at the intersection of the chemical search space of 10^63 drug-like compounds and the biological search space of 10^5 targets.
Using AI to guide early prediction and optimization of drug-like molecules, we can broaden the number of considered candidate molecules, identify failures earlier when they are relatively inexpensive, and accelerate delivery of novel therapeutics to the clinic for patient benefit.
In virtual screening, we start with a large, diverse library of compounds and filter it using a predictive model that has learned to predict each compound's properties; in other words, the model has learned to map the chemical space to the functional space. If a compound is predicted to have the desired properties, we carry it forward for further experiments. In de novo design, we start with a defined set of property criteria and use a generative model to generate the structure of an ideal drug candidate; the generative model has learned to map the functional space to the chemical space.
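The virtual-screening filter above can be captured in a few lines. This is a toy sketch: the compound names, the precomputed scores, and the 0.5 threshold are all illustrative stand-ins for a real compound library and a trained property-prediction model.

```python
# Predicted property scores, as a trained model might produce for each
# library compound (names and scores are made up for illustration)
library = {"cmpd_A": 0.91, "cmpd_B": 0.34, "cmpd_C": 0.78}

def virtual_screen(scored_library, threshold=0.5):
    """Keep only compounds whose predicted property clears the threshold."""
    return [name for name, score in scored_library.items()
            if score >= threshold]

hits = virtual_screen(library)  # compounds carried forward for experiments
```

In practice the scores come from a model that maps structure to predicted properties, and the threshold reflects the property criteria a candidate must meet before it is worth testing in the wet lab.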
New drugs per billion USD of R&D reflects a downward trajectory. You may have heard of Moore’s Law, which is the observation that the number of transistors on an integrated circuit doubles approximately every two years. Moore’s Law implies that computing power doubles every couple of years while cost decreases. Eroom’s Law (Moore spelled backwards) is the observation that the inflation-adjusted R&D cost of developing new drugs doubles roughly every nine years. Eroom’s Law reflects diminishing returns in developing new drugs, including factors such as lower risk tolerance by regulatory agencies (the “cautious regulator” problem), the “throw money at it” tendency, and the need to show more than a modest incremental benefit over current successful drugs (the “better than the Beatles” problem). The plot was constructed with data from Scannell et al., which discusses the trend in greater detail [6].
If we know both the structure of our ligand or compound and the target, we can use structure-based design methods. If we only know the ligand structure, we are restricted to ligand-based design methods. Alternatively, if we only know the target structure, we can use de novo design to guide generation of a suitable drug candidate.
Artificial intelligence, ML, and deep learning are all related to each other.
Example pairs of isomeric SMILES.
Example drug molecules for each USAN stem classification within our data set.
Chemical space exploration in a reduced, 4-dimensional space.
Decision boundary of our logistic regressor for classifying “-cillin” (left) and “-olol” (right) USAN stems. For each plot, colored samples belong to the positive class and uncolored samples belong to the negative class.
We can break down drug design into target identification and validation, hit discovery, hit-to-lead (lead identification), lead optimization, and preclinical development. Once a drug candidate has progressed to the drug development stage, it will need to pass multiple phases of clinical trials testing safety and efficacy prior to submission to and review by the FDA and launch to market.
We can break down the ADMET properties into the following broad descriptions. Absorption refers to the process by which a drug enters the bloodstream from its administration site, such as the gastrointestinal tract for oral drugs or the respiratory system for inhalation drugs. Distribution pertains to the movement of a drug within the body once it has entered the bloodstream. Metabolism refers to the biochemical transformation of a drug within the body, primarily carried out by enzymes. Metabolic processes aim to convert drugs into more polar and water-soluble metabolites, facilitating their elimination from the body. Excretion involves the removal of drugs and their metabolites from the body. Toxicity assessment aims to evaluate the potential adverse effects of a drug candidate on various organs, tissues, or systems.
We can segment the early drug discovery pipeline into four main phases: target identification, hit discovery, hit-to-lead or lead identification, and lead optimization. Target identification designates a valid target whose activity is worth modulating to address some disease or disorder. Hit discovery uncovers chemical compounds with activity against the target. Lead identification selects the most promising hits and lead optimization improves their potency, selectivity, and ADMET properties to be suitable for preclinical study.
In virtual screening, we conducted our search across a chemical space consisting of an enormous set of molecules. In de novo design, we are still conducting an (informal) search, just not across the chemical space. We are now searching across the functional space of potential molecular properties. If our model “learns” which section of the functional space maps to molecules that have ideal binding affinity and safety, then perhaps it can reverse-engineer novel molecule structures in the chemical space that match our functional criteria.
Preclinical trials evaluate drug candidate safety and efficacy on model organisms. Phase I clinical trials evaluate drug candidate safety in its first exposure to humans. Phase II and Phase III clinical trials continue to collect data on safety while measuring drug candidate efficacy on larger groups of patients. The pass rate of our lead compounds decreases drastically as they progress beyond preclinical stages, along with an increase in the associated time to test them.

Summary

  • Developing therapeutics entails a long, arduous process. Traditional development from ideation to market is costly (on the order of billions of dollars), lengthy (10 to 15 years), and risky (attrition of over 90%). Through advances in AI, we can discover therapeutics that have better safety profiles, address medical conditions or diseases with poor existing coverage, and reach patients more quickly.
  • Drug discovery can be thought of as a difficult search problem that exists at the intersection of the chemical search space of 10^63 drug-like compounds and the biological search space of 10^5 targets.
  • Applications of AI to drug design include molecule property prediction for virtual screening, creation of compound libraries with de novo molecule generation, synthesis pathway prediction, and protein folding simulation.
  • ML is a subfield of AI that enables computers to learn from and make decisions based on data, automatically and without explicit programming or rules on how to behave. Example ML algorithms include logistic regression and random forests. Deep learning is a subfield of ML that uses deep neural networks to extract complex patterns and representations from data.
  • We can segment the early drug discovery pipeline into four main phases: target identification, hit discovery, hit-to-lead or lead identification, and lead optimization. Target identification designates a valid target whose activity is worth modulating to address some disease or disorder. Hit discovery uncovers chemical compounds with activity against the target. Lead identification selects the most promising hits and lead optimization improves their potency, selectivity, and ADMET properties to be suitable for preclinical study.
  • Popular, well-maintained chemical data repositories include ChEMBL, ChEBI, PubChem, Protein Data Bank (PDB), AlphaFoldDB, and ZINC. When using a new data source, learn how it was assembled and how quality is maintained. Garbage data in, garbage model out. See “Appendix B: Chemical Data Repositories” for more information.

FAQ

What’s the difference between drug discovery and drug development?
Drug discovery covers selecting a biological target, finding hits that bind to it, narrowing those hits to leads, optimizing leads (potency, selectivity, ADMET), and preclinical testing. Drug development begins after preclinical success and includes human clinical trials (Phases I–III), regulatory review (e.g., FDA submission), and launch.

Why is drug discovery such a difficult search problem?
The chemical space is astronomically large—roughly 10^63 drug‑like molecules—while the biological target space spans ~10^5 human proteins and variants. Experimental screens test ~10^5–10^7 compounds/day, which is negligible at this scale. Costs of bringing a drug to market can reach $1–3B over 10–15 years with high clinical failure rates, making smarter, faster triage essential.

How does machine learning add value in early discovery?
ML predicts molecular properties (e.g., binding affinity, toxicity) directly from structure to prioritize candidates before expensive experiments. It enables virtual screening at scales of ~10^9–10^12 compounds/day, identifies failures earlier, focuses wet‑lab resources, and can propose optimizations—cutting time and cost across the pipeline.

What is virtual screening, and how do physics‑based and ML‑based approaches compare?
Virtual screening prioritizes compounds computationally. Physics‑based methods (docking, molecular dynamics) simulate interactions using force fields but are computationally intensive. ML‑based screening learns from data to predict properties directly, offering orders‑of‑magnitude speedups at the cost of relying on training data quality and coverage.

What is generative chemistry (de novo design) and why does novelty matter?
Generative chemistry asks models to produce new molecular structures that meet specified property targets—searching in “functional space” and mapping back to “chemical space.” Novel compounds can address unmet needs and offer IP advantages, countering trends like Eroom’s Law and “me‑too” drugs. Deep learning helps by learning task‑specific features beyond hand‑engineered descriptors, improving novelty and quality.

What are targets and ligands, and what are agonists, antagonists, and inhibitors?
Targets are biomolecules (often proteins) whose activity we aim to modulate. Ligands bind targets and can act as agonists (activate a response), antagonists (block a response), or inhibitors (reduce enzyme activity). Off‑target binding can cause side effects, so selectivity is a key design objective.

What are ADMET, PK, and PD, and how do they guide lead optimization?
ADMET covers Absorption, Distribution, Metabolism, Excretion, and Toxicity—determinants of a drug’s behavior and safety in the body. PK (what the body does to the drug) and PD (what the drug does to the body) frame these properties. Lead optimization tunes efficacy, potency, selectivity/safety, and bioavailability to improve clinical success odds.

What is retrosynthesis, and how can AI help make synthesis practical?
Retrosynthesis plans routes from a target product back to simpler precursors, but each step can branch into ~10^4 transformations, causing combinatorial explosion. Data‑driven models help rank plausible disconnections and routes, enabling faster, cheaper synthesis planning for both novel and existing drugs, and supporting process chemistry for scale‑up.

Why does protein structure prediction (e.g., AlphaFold) matter for drug discovery?
Protein function is tightly linked to 3D structure, yet experimental determination lags far behind sequence data. Accurate structure prediction accelerates target understanding, binding‑site identification, docking, and mechanism hypotheses—compressing timelines and increasing the quality of structure‑based design decisions.

How are molecules represented for ML, and what tools are commonly used?
Molecules can be represented as SMILES strings and converted into numerical features such as fingerprints (e.g., ECFP6 1024‑bit vectors). Canonical SMILES standardize a unique text form; isomeric SMILES capture stereochemistry (critical for activity). Toolkits like RDKit integrate with ML libraries to compute descriptors, visualize structures, reduce dimensionality (e.g., PCA), and build simple models (e.g., logistic regression) for rapid prototyping.
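The canonical and isomeric SMILES behavior can be demonstrated in a few lines. This is a small sketch assuming RDKit is installed; the alanine example is illustrative and not necessarily from the chapter.

```python
from rdkit import Chem

# Two different ways of writing the same molecule (alanine)...
a = Chem.MolFromSmiles("C(C)(N)C(=O)O")
b = Chem.MolFromSmiles("CC(N)C(=O)O")
# ...canonicalize to one unique string
assert Chem.MolToSmiles(a) == Chem.MolToSmiles(b)

# Isomeric SMILES preserve stereocenters via the @ markers;
# dropping them would lose stereochemistry that can change bioactivity
chiral = Chem.MolFromSmiles("C[C@@H](N)C(=O)O")  # alanine, stereocenter set
iso = Chem.MolToSmiles(chiral)  # stereo marker retained by default
```

Canonicalization matters for deduplicating data sets (the same compound can appear under many SMILES spellings), while isomeric SMILES matter because stereoisomers of the same formula can have very different biological effects.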
