Discovery: Definition
March 15, 2020 · Updated Aug. 2, 2024
Discovery is a preliminary phase in the UX-design process that involves researching the problem space, framing the problem(s) to be solved, and gathering enough evidence and initial direction on what to do next.
Discoveries are crucial to setting design projects off in the right direction by focusing on the right problems and, consequently, building the right thing.
A discovery should result in the following:
- Understanding of users: Through user research, the project team achieves an understanding of who the users are and how they are affected by a particular problem, as well as what they need, desire, and value from a solution (and why).
- Understanding of the problems to be solved and of the opportunities: Through investigative work, the team understands how and why the problem(s) occur, and what effect the problem has on users, as well as on the organization. It understands the magnitude of the problem and opportunities for the organization, product, or service.
- Understanding of existing constraints: By learning about current business processes, solutions, and available compatible technologies, the team can identify feasible solutions to explore after discovery.
- Shared vision: During discovery, the team works with stakeholders to understand overarching business objectives and desired outcomes and get answers to questions such as “What do we want to achieve?” or “What does success look like?” This approach, in turn, focuses the team on the problems (and later the solutions) that will have the greatest impact on that outcome. The team should also have an idea of what to measure going forward, to understand whether the solution is working towards the desired outcome.
Well-done discoveries ensure that any solutions proposed later are desirable to users, viable for the organization, and feasible with the available technology.
A discovery starts broad and requires team members to investigate the context of the problem. The double-diamond diagram introduced by the UK Design Council illustrates the high-level process of a discovery: first, the team expands its understanding of the problem by researching its full context; armed with this knowledge, the team agrees on what the problem is before moving to the next phase of ideating and testing in the Develop stage.
When Is a Discovery Needed?

A discovery is needed anytime there are many unknowns that stop a team from moving forward. Moving forward on assumptions alone can be risky, as the team may end up solving a problem that doesn’t really matter — wasting time, money, and effort.
A discovery might also be needed when the team is not aligned on what it wants to achieve.
Discoveries are often carried out differently depending on the type of problem the team needs to investigate. Below are some examples of instigators:
- New-market opportunities. If an organization is looking to explore where to expand its product or service offerings, a discovery is often needed. The discovery might involve researching a new audience, performing competitive reviews, and investigating whether the size of the opportunity warrants entering the market.
- Acquisitions or mergers. When organizations merge, it’s likely that systems, processes, and tools will also need to be consolidated. A discovery could focus on common problems faced by each organization, in order to find a common solution.
- New policy or regulation. This instigator is especially relevant for government organizations or organizations that operate in an environment affected by regularly changing regulations. Such a discovery would involve studying the populations affected by the change, reviewing the regulation to understand it, and assessing how business operations must change to support the new regulation.
- New organization strategy. This driver of change comes internally from the organization (unlike new regulations, which often originate externally). For example, during my time in the UK Government, one government-wide strategy was to become ‘digital by default’, which meant moving away from expensive, paper-based processes to efficient (digital) ones. Discoveries in numerous government departments focused on understanding the needs of their users, as well as the extent of paper-based processing, in order to ensure that a shift to digital was, in fact, efficient and user-centered.
- Chronic organizational problems. Perhaps sales have been low this year, or satisfaction has been low for several quarters. Often organizations find themselves focusing on symptoms (e.g., adding webchat) rather than on causes. A discovery involves inward- as well as outward-facing research to understand why these problems occur and examines their causes to identify the greatest opportunities for improvement.
Common Activities in Discovery

There are many different types of activities that could be carried out in a discovery. Here are a few that are performed in most discoveries.
Exploratory Research
Research helps us learn new things about a domain. This type of research is known as generative or exploratory because it generates new, open-ended insights. By carrying out this research, we learn about the problem space (or the opportunity space). Discovery does not (typically) involve testing a hypothesis or evaluating a potential solution.
At the beginning of a discovery, the research topic might be extremely broad, whereas later it narrows in on those aspects of the problem space that have the most unknowns or present the greatest opportunities.
Common exploratory research methods include user interviews , diary studies , and field studies with a representative group of users. Surveys can also be used to gather data from a larger group of users; the data can be triangulated with qualitative insights from other methods.
Stakeholder Interviews
Stakeholders often have unique knowledge, insight, and data about internal, backstage processes and the users who interact with them. Interviewing stakeholders provides an additional layer of insight that helps the team understand the scale of the problem and the viability of later solutions.
Interviewing key people in the organization can provide you with an understanding of:
- Key business objectives of the organization, individuals, or teams (These are helpful to determine if and how these broader goals tie in to the goals of the project.)
- Data and insights about how problems affecting users impact backstage work (such as inquiry type and volume, additional processing)
- Solutions they’ve tried before that have or haven’t worked, how they implemented them, what other problems they caused, as well as why they were removed (if applicable)
Beyond formal interviews, including key stakeholders in the discovery process or having them weigh in throughout not only facilitates further buy-in, but also provides more insights.
Workshops

Workshops align team members and stakeholders and are a useful tactic for discovery. Some workshops commonly used in discoveries include:
Kickoff workshop. A kickoff workshop occurs at the beginning of the discovery and aims to create alignment on the objective of the discovery, and when it will be complete. It is normally attended by the client or key stakeholders who are invested in the discovery, as well as by the discovery team itself. It can also include agreement on the roles and responsibilities of each team member during the discovery.
Assumption-mapping workshop. Many teams bring in experts and conduct data-gathering activities in a workshop, questioning the validity of certain ‘facts’ and identifying the deep-rooted assumptions that need further exploration. Part of this workshop can also include prioritizing assumptions by the risk they pose to the project’s outcome; the riskiest assumptions should be researched first.
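One lightweight way to run that prioritization is to score each assumption on its impact and on how much evidence currently supports it, then rank by risk. A minimal Python sketch (the scoring scheme and the assumptions themselves are hypothetical, not something the article prescribes):

```python
# Hypothetical assumption-mapping helper: rank assumptions by risk,
# where risk = impact on the outcome x lack of supporting evidence.
assumptions = [
    {"text": "Users want to file reports on mobile", "impact": 5, "evidence": 1},
    {"text": "Finance reviews every submission", "impact": 3, "evidence": 4},
    {"text": "Paper forms cause most delays", "impact": 4, "evidence": 2},
]

for a in assumptions:
    # High impact combined with weak evidence makes an assumption risky.
    a["risk"] = a["impact"] * (5 - a["evidence"])

# Research the riskiest assumptions first.
for a in sorted(assumptions, key=lambda a: a["risk"], reverse=True):
    print(f"risk={a['risk']:>2}  {a['text']}")
```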
Research-question–generation workshop. This workshop is similar to the assumption-mapping workshop, and the two are often combined; the team discusses what the unknowns are and drafts research questions. The research questions can be prioritized in terms of their importance and how well they will work to gather the knowledge needed to move forward.
Affinity-diagramming workshop. After performing exploratory user research — such as user interviews, contextual inquiry, and diary studies — insights and observations are transferred to sticky notes and the team works to affinity-diagram them to uncover themes around problems, causes, symptoms, and needs.
Mapping workshop. The team plots insights from user research and other investigations into a map of the problem space, customer experience, journey, or service. The map is used to create alignment, to identify gaps that need further research, and to highlight major opportunities.
Ideation workshop. This workshop takes place at the end of the discovery. Once the team has performed the necessary research, the team crafts ideation statements like How-Might-We’s based on the problems or insights it has uncovered and uses them to generate solution ideas to explore going forward.
People Involved

Discovery is best performed by a small, multidisciplinary team. Ideally, team members are dedicated full-time to the project and are collocated when working in person. Depending on the scale of the problem and the discovery activities, the number of people involved and the type of roles they play may vary. However, it’s a good idea to keep the team small; between 3 and 7 members is ideal.
Key roles include:
- Someone who can do research: A UX researcher or UX designer needs to plan and carry out user research.
- Someone who can facilitate or lead the team: Although self-organizing teams are always best, a team leader is helpful when team members are new to discovery or the team is large. Many titles could fill this role, including product manager, project manager, delivery manager, service designer, and UX strategist. The team leader will need to facilitate workshops, ensure that the team communicates well, and maintain alignment throughout the discovery process.
- A sponsor or owner: Someone from the organization needs to own the project. This person often has a lot of domain and subject-matter expertise, as well as knowledge about who needs to be consulted. The owner should be influential enough to get the discovery team access to other people, teams, or data.
- Someone technical: A developer or a technical architect who understands enough technical detail to be able to speak to engineers is needed in order to explore available technologies, their capabilities, and constraints.
In addition to these roles, there could be many others, including business analysts who research business processes, visual designers who explore branding, or interaction designers who work on developing appropriate design principles. It’s best if the team agrees to specific roles and responsibilities at the beginning of the discovery.
The Outcome of a Discovery

At the end of the discovery, the team has a detailed understanding of the problem and what outcomes to aim for, as well as where to focus its efforts. They may also have some high-level ideas for solutions that they can take forward and test. In some cases, the end of a discovery might be a decision not to move forward with the project because, for example, there isn’t a user need.
Discovery isn’t about producing outputs for their own sake. However, the following might be produced to help the team organize learnings about the problem space and users:
- A finalized problem statement: a description of the problem, backed up with evidence, that details how big it is and why it’s important
- Finalized maps, such as a user-journey map or service blueprint
- User-needs statements
- High-level concepts or wireframes (for exploring in the next phase)
A discovery is a preliminary phase of a design project. It can be initiated by many different kinds of problems, involve different-size teams, and include many research or workshop activities. However, all discoveries strive to gain insight into a problem space and achieve consensus on desired outcomes.
Reference: UK Design Council, “What is the framework for innovation? Design Council’s evolved Double Diamond.”
Discovery research is a UX essential — Here’s how to get started
Georgina Guthrie
April 13, 2022
Design is all about problem-solving. Often, these are quite big problems with complex answers — so designers break the process down into different stages to make it more manageable. These stages generally focus on research, design, and testing/development.
Today, we’ll take a closer look at how designers can move between the first two stages to gather information and test their ideas before fully launching into the development phase.
With this back-and-forth approach, it’s possible to analyze and make informed decisions about which findings to take forward. Discovery research ultimately leads to a final product that meets users’ needs — not just the designer’s assumptions.
What is discovery research?
Discovery research (also called generative, foundational, or exploratory research) is a process that helps designers understand user needs, behaviors, and motivations through different methods, such as interviews, surveys, and target market analysis.
Discovery research is related to product research but involves a broader analysis. Whereas the former deals with all kinds of research — for brands, innovations, products, and more — the latter is solely focused on the product.
How does discovery research help with design?
Discovery research helps designers understand user needs, behaviors, and motivations, which form the basis of key design decisions.
Conducting this early-stage analysis also ensures that designs are based on real user needs rather than the designer’s assumptions. This approach leads to products that feel more like tailor-made creations rather than a broad approximation of what users want.
Finally, it saves time and money by revealing potential problems before they become bigger (and more expensive) issues further down the line.
What are the main goals of discovery research?
- Understanding your users better: the first and most important goal of discovery research is to help you get under the skin of your users. By understanding user goals and pain points, you can design solutions that address their needs.
- Improving design decisions: the second goal of discovery research is to improve design decisions. Instead of simply creating a product the design team thinks is cool, you can develop a product roadmap based on relevant data.
- Saving time and money: testing before leaping right into development means you can spot potential problems and work through them, investing time and resources wisely.
- Creating a shared vision: discovery research can create a shared vision for a project among the design team. Because the research provides a common understanding of user needs, design teams can more easily agree on what to prioritize.
What are the benefits of using both qualitative and quantitative research methods?
Qualitative research is based on open-ended questions and provides insights into people’s attitudes, opinions, and feelings. Typically, this research involves interviews, focus groups, or open-ended surveys. Quantitative research, on the other hand, uses closed-ended questions and focuses on hard data, including:
- Performance analytics: websites and apps contain a wealth of numerical data. Google Analytics can show you everything from the number of page views to time spent on a page.
- Target market analysis: demographic research looks at characteristics such as the age, gender, and location of your target market. It’s often collected through surveys distributed via email.
The benefits of using qualitative and quantitative research methods are twofold.
Qualitative research is often viewed as ‘creative’ and exploratory, while quantitative research is considered more ‘scientific’ and focused. Both types of research reveal something different, each with its strengths and weaknesses.
Qualitative research is good for exploring new ideas and getting an in-depth understanding of user needs. However, it’s often less reliable than quantitative research and deals with smaller samples, which may not represent the wider population.
Quantitative research is good for obtaining hard data and measuring people’s feelings about specific topics or activities. The downside is it’s less nuanced than qualitative research and may provide a less multifaceted analysis of user needs.
Using both qualitative and quantitative research methods, designers can get a complete picture of user needs.
When should you run a discovery session?
Use a discovery session any time the design team needs to move forward in a design and/or when relying on guesswork or intuition is impossible or risky.
Here are some common real-life examples:
- New market opportunities: companies that want to enter a new market must understand user needs and identify opportunities to fill current gaps.
- Rebranding: before rebranding, organizations have to understand how users feel about the current brand, what they want from it, and what issues to avoid moving forward.
- Redesign: when redesigning a product, design teams need to understand what users like and dislike about the current product and how they can innovate in the future.
- Mergers: to ease the transition, merging companies need to understand how employees from both companies feel about the merger and design processes to meet their needs.
- New organizational strategy: when implementing a new strategy, organizations must consider how employees view the upcoming changes and communicate plans and expectations.
- Organizational problems: companies that are struggling with organizational problems must investigate the root cause of the problem to develop effective solutions.
How do you run a discovery research session?
The exact route you take will depend on your goals. Sometimes, you’ll want to use a mixture of methods (the more, the better). At other times, you’ll focus on one or two options. Here are some common discovery research methods.
User interviews
Interviews are a common qualitative research method. They involve sitting down with users and asking open-ended questions about their needs, behaviors, and motivations. Interviews are very useful for understanding user feelings and attitudes in their own words prior to any design work taking place.
Focus groups
Focus groups are a type of qualitative research that involves a group of people discussing a topic together. Not only does this help you find out how people feel about a design, but it also draws out deeper responses as participants build on each other’s comments.
Tips for running a focus group
- The ideal group size is around eight to ten people. To get started, you’ll need to define the topic of discussion and prepare some questions to spark conversation.
- When conducting the focus group, it’s important to moderate the discussion effectively. Keep things on track, offer up discussion points if the momentum slows, and ensure everyone can speak.
- Once the focus group is over, analyze the data you collected. Write a transcript of the discussion, or use diagramming software to help with the analysis.
Surveys

Surveys are a quantitative research method that asks closed-ended questions about user needs. However, it is common to include a few open-ended questions to provide context for a user’s responses to the closed-ended ones. This type of research is useful for obtaining hard data.
Decide what type of questions you want to ask: closed-ended or open-ended. Closed-ended questions have a ‘yes’ or ‘no’ answer, or participants choose a specific response from a list of options. Open-ended questions invite longer, freeform answers.
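To make the distinction concrete, here is a minimal Python sketch of how the two question types might be represented and tallied; the survey content and field names are hypothetical:

```python
from collections import Counter

# Hypothetical survey definition: closed-ended questions carry a fixed
# list of options; open-ended questions accept freeform text.
survey = [
    {"id": "q1", "type": "closed", "text": "Do you use the product weekly?",
     "options": ["yes", "no"]},
    {"id": "q2", "type": "closed", "text": "Which feature do you use most?",
     "options": ["search", "reports", "sharing"]},
    {"id": "q3", "type": "open", "text": "What problem were you trying to solve?"},
]

# Example responses keyed by question id.
responses = [
    {"q1": "yes", "q2": "search", "q3": "Finding last month's invoices quickly."},
    {"q1": "yes", "q2": "reports", "q3": "Summarizing usage for my manager."},
    {"q1": "no", "q2": "search", "q3": "Locating a shared document."},
]

# Closed-ended answers yield hard counts; open-ended answers are kept
# aside for qualitative coding.
for q in survey:
    if q["type"] == "closed":
        counts = Counter(r[q["id"]] for r in responses)
        print(q["text"], dict(counts))
    else:
        print(q["text"], "->", len(responses), "freeform answers to code manually")
```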
Ethnographic user research
Ethnographic user research is a form of qualitative research in which you observe users in their natural environment. This type of research is useful for understanding user behaviors and needs.
Tips for conducting ethnographic user research
- Define the scope of your research, and decide on the observation methods prior to session kick-off.
- Choose one to three research methods that suit your resources and goals. Interviews, surveys, and user testing are all valid forms of observation.
- Once you collect user data, collate and analyze it. At this point, you’ll have dense information. Turning the raw data into business intelligence that makes sense for the wider team and stakeholders is important.
Diary studies
Diary studies are a qualitative research method in which participants write down their thoughts and feelings about a given topic. Journaling gives a glimpse of a user’s thought processes, so you can better understand how they feel about a design or prototype.
Here are some questions you can ask to get the user thinking:
- What were your thoughts and feelings about the design/prototype?
- How easy was it to use the product?
- What did you like or dislike about it?
- Why did you feel that way?
- What problems did you encounter?
- How well did the design meet your needs?
Diary logging techniques you need to know
- Interval-contingent protocol: ask participants to record their thoughts and feelings at fixed intervals (e.g., every hour or every day). Use this type of diary study to understand how people feel over time (see the sketch after this list).
- Event-contingent protocol: ask participants to record their thoughts and feelings after specific events, such as using a feature or carrying out a particular process. Choose this format to study how people react to specific events.
- Saturation sampling: ask participants to keep a diary until they have nothing new to say about the topic. Similar to interval methods, this diary study helps evaluate user feelings over time.
- Choice sampling: give participants a list of topics to choose from and ask them to record their thoughts and feelings about their chosen topic. This study helps you understand how people feel about different design aspects and what issues are most important to them.
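As referenced above, here is a minimal Python sketch of the first two protocols, assuming a simple prompt scheduler; the function names, cadence, and events are all hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical helper for an interval-contingent protocol: generate
# prompt times at a fixed cadence over the study window.
def interval_prompts(start: datetime, days: int, every_hours: int):
    end = start + timedelta(days=days)
    t = start
    while t < end:
        yield t
        t += timedelta(hours=every_hours)

# Hypothetical helper for an event-contingent protocol: prompt only
# when a watched event occurs (e.g., the participant used a feature).
def event_prompts(event_log, watched_events):
    for timestamp, event in event_log:
        if event in watched_events:
            yield timestamp

study_start = datetime(2024, 1, 8, 9, 0)
print(list(interval_prompts(study_start, days=2, every_hours=12)))

log = [(datetime(2024, 1, 8, 10, 5), "export_report"),
       (datetime(2024, 1, 8, 14, 30), "login"),
       (datetime(2024, 1, 9, 11, 2), "export_report")]
print(list(event_prompts(log, watched_events={"export_report"})))
```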
Tips for conducting diary studies
- Make sure you store the data securely if the diaries contain personal or sensitive information.
- Define the study’s goals and the logic you’ll use to evaluate the data you receive. Diary studies can be time-consuming for both participants and researchers. As such, ensuring the study is well-designed and the results are worth the effort is crucial.
- Provide participants with an incentive to take part. Diary studies require time and energy, so it’s a good idea to compensate participants with a gift voucher or free product.
Card sorting

Card sorting is a type of qualitative research that involves asking participants to sort a set of cards into groups. The goal is to observe how people think about a particular topic so you can design intuitive products.
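Card-sort results are often analyzed with a co-occurrence matrix (how often participants grouped each pair of cards together), which can then be clustered. A minimal Python sketch with invented cards and groupings, using scipy's hierarchical clustering as one possible choice:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical card-sort results: each participant's grouping of the
# same five cards (lists of lists of card labels).
cards = ["invoices", "receipts", "profile", "password", "reports"]
sorts = [
    [["invoices", "receipts", "reports"], ["profile", "password"]],
    [["invoices", "receipts"], ["profile", "password"], ["reports"]],
    [["invoices", "reports"], ["receipts"], ["profile", "password"]],
]

# Co-occurrence matrix: how often each pair of cards was grouped together.
idx = {c: i for i, c in enumerate(cards)}
co = np.zeros((len(cards), len(cards)))
for groups in sorts:
    for group in groups:
        for a in group:
            for b in group:
                co[idx[a], idx[b]] += 1

# Convert similarity to distance and cluster hierarchically.
dist = 1 - co / len(sorts)
np.fill_diagonal(dist, 0)
clusters = fcluster(linkage(squareform(dist), method="average"),
                    t=2, criterion="maxclust")
for card, c in zip(cards, clusters):
    print(card, "-> cluster", c)
```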
Where else can you find data?
Chatting with users is important, but don’t neglect the wealth of data already at your fingertips. Web analytics, social media, and customer support data can give you insights into how your users think and feel.
- Business data: if you’re working on an internal tool, you probably have access to a lot of data about how it’s used. This information is invaluable for understanding the steps users take to perform an action or solve a problem.
- Web analytics data: this data tells you how people are using your website or app. Use it to understand what pages are being visited, how much time users spend on a page, and what elements they interact with (see the sketch after this list).
- Social media data: social media can be a great way to understand how people feel about your brand. Use social listening tools to track mentions of your brand and see what people are saying.
- Customer support data: if you offer customer support, the data can show you what problems people encounter when using your product.
- Competitor resources: it’s worth looking at competitor resources, such as websites, blog posts, and whitepapers, for ideas on improving or differentiating your product.
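As referenced in the web-analytics item above, here is a minimal pandas sketch showing how time on page can be approximated from a raw page-view log; the log schema and values are hypothetical:

```python
import pandas as pd

# Hypothetical page-view log: one row per page view with a timestamp.
views = pd.DataFrame({
    "user": ["a", "a", "a", "b", "b"],
    "page": ["/home", "/pricing", "/signup", "/home", "/docs"],
    "ts": pd.to_datetime(["2024-05-01 09:00", "2024-05-01 09:02",
                          "2024-05-01 09:07", "2024-05-01 10:00",
                          "2024-05-01 10:04"]),
})

# Approximate time on page as the gap to the user's next page view
# (the last view a user makes has no gap, hence NaT).
views = views.sort_values(["user", "ts"])
views["time_on_page"] = views.groupby("user")["ts"].shift(-1) - views["ts"]

print(views)
print(views.groupby("page")["time_on_page"].mean())
```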
Analyzing and assessing discovery research
So, you’ve got all this data. Now what?
It’s time to assess it. Evaluating your discovery research involves looking at the numbers and determining how everything fits together. You can write a report or create a diagram or graph to help you visualize it all.
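For instance, a minimal matplotlib sketch that charts how often each coded theme came up in interviews; the themes and counts are invented:

```python
import matplotlib.pyplot as plt

# Hypothetical tally of themes coded from interview notes.
themes = {"navigation confusion": 14, "slow exports": 9,
          "missing mobile app": 7, "pricing unclear": 5}

plt.barh(list(themes), list(themes.values()))
plt.xlabel("Number of participants mentioning theme")
plt.title("Discovery research: coded interview themes")
plt.tight_layout()
plt.savefig("themes.png")
```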
When assessing qualitative data, it is important to consider the following factors:
- The quality and reliability of the data: bad data could send you in the wrong direction. If in doubt, chuck it out.
- The quantity of the data: too much could be a burden when turning it into reports. Too little might give you unreliable results.
- The context of the data: make sure you apply data to the relevant area, but at the same time, don’t look at it in isolation.
- The meaning of the data: only include responses that directly answer your questions. Don’t include irrelevant or unclear data.
- The validity of the data: data goes out of date. Disregard anything that’s no longer relevant.
Final thoughts
Data visualization features, like those in Cacoo, can help turn all those numbers into insight that makes sense.
Resources like persona templates, user story maps, and other research and design diagrams can help you see patterns and trends in the data and communicate your findings to others — including stakeholders who might not have a technical background.
Remember, your top priority is to make the data as understandable as possible for everyone on the team — whatever their background. After all, data is only useful if it’s used and understood!
Discovery Research
Our mission is to generate new knowledge that could transform health in ways we can't always anticipate today.
We fund researchers across fields, disciplines and career stages. And we're changing how research is done, through improved tools, technology, methods and culture.
The challenge
We need new knowledge about life, health and wellbeing. But short-term goals in research are stifling ambition. Our financial and political independence means we can advance discoveries through a longer-term outlook that better supports researchers and invests in the tools, methodologies, technologies and research cultures that could transform health.
Through longer-term funding, support for improved research environments and advocating for policy action, we're creating the conditions to improve health for everyone.
Our goals
Transformative knowledge
Research we fund across fields and disciplines generates new knowledge with the potential to transform life, health and wellbeing in ways that we may not have anticipated.
New generation of diverse research leaders
Researchers have the resources, time and freedom to develop their skills and potential.
Productive research environments
Advanced tools, technologies, methodologies and a thriving research culture are in place. This enables innovation and success in research.
Creating the conditions for discoveries to improve health: our vision for Discovery Research
Michael Dunn, Director of Discovery Research, shares his ambitions for transformative research that supports careers in environments where discoveries improve health.
Our approach
Funding research
We're generating new knowledge with the potential to improve life, health and wellbeing. We don’t always know where new breakthroughs will come from, so our scope in discovery research is deliberately broad.
We run funding opportunities based on career stage three times a year. Research can involve observational, experimental or theoretical approaches. It can be carried out in the laboratory, office, clinic or field.
We're also creating the conditions for innovation through directed funding for institutions and major initiatives .
For example, we support critical research fields such as bioimaging and genomics.
Creating better research cultures
We are dedicated to improving research environments, ensuring that researchers have the necessary freedom, resources and time to develop their skills. We also work to break down barriers to promote collaboration and interdisciplinary research.
By doing this we encourage different questions to be asked, leading to new ideas and breakthroughs.
Enabling innovation
We are working to improve the wider research ecosystem. We do this by supporting the development of tools, technologies and methodologies for innovation and success, and by bringing together the right expertise in the right environments.
For example, our Discovery Research Platforms are a £73 million investment to overcome practical, technological and methodological barriers in research.
Funding opportunities
We run recurring Discovery Research awards three times a year. These are for researchers at different career stages across all disciplines as long as the research has the potential to improve life, health and wellbeing.
What is Discovery Research?
Discovery Research is our programme covering studies across fields and disciplines that lead to new knowledge of life, health and wellbeing.
We don't know where the next breakthrough will come from, so our scope is deliberately broad. We fund projects that range from the fundamentals of biology to the development of new methods, tools and technologies. We also fund population health studies exploring the social, ethical, cultural, political, economic and historical contexts of human health.
We also accept applications related to our strategic programmes: Climate and Health, Infectious Disease, and Mental Health.
Our work in action
Enabling evidence-based improvements to maternity care for people with autism
Shaping the future of our health through the development of the Human Cell Atlas
Enabling advances in research by co-funding Diamond Light Source, the UK’s national synchrotron, with the UK government
Convening technology developers and users to advance bioimaging and tackle barriers in discovery research
Supporting discovery research across the globe
Understanding how epistemic injustice impacts people’s experiences of healthcare across society
What's the difference between "research" and "discovery"?
Two pillars of user experience legwork — research and discovery*. But what’s the difference between them, really?
In practice, I've heard the terms used nearly interchangeably, and sometimes it feels like "discovery" is invoked as just a fancy way of saying research. One answer that a quick Google search uncovered: in research we seek something specific, whereas discovery is open-ended in what we're trying to find. True?
And of course, in a dictionary sense, there's a clear difference between to research and to discover . When it comes to UX, though, do we mean something particular or different as it relates to our work, our methods, our process?
*Not to be confused with discovery as it relates to affordance.
- terminology
- user-research
4 Answers
Simply said, there is a causal relation between these two terms.
Research is the action, while discovery is the result. You discover something because you research it.
Research is the “process” and discovery is the “product.”
To name a few more differences, research can be extremely complex and diversified.
Research supports all kinds of strategies and proactive thinking, while discovery is simple, irrespective of its subject. You simply find something. Sure, afterwards you can embellish it and present it in a structured manner, but that's another process.
Discovery is also a subjective reality, whereas research is objective. As long as you engage in the process, you know that you are conducting research. Discovery, on the other hand, is a matter of evaluating your outcome: you can either consider that you’ve made a discovery or not. It is all about how you see the result of your research.
I think in terms of the core competencies of a UX practitioner being RESEARCH, DESIGN, PROTOTYPE and TESTING, it would be easiest to explain the difference in such a way that it would pass the "lay person's test".
That is, RESEARCH encompasses the process and procedure for removing uncertainty by collecting data, processing it and then analysing the information so that you can form an opinion about something. The outcome of doing research is to provide input towards the design process rather than basing the proposed solution on assumptions.
The term DISCOVERY, using the above general description of research, refers to the stage or phase in the research process where you are mapping out the problem domain or uncovering information that provides context to the research work that you are carrying out.
The outcome of the discovery phase is to narrow the scope of the UX work to something that can be summarized in a problem statement.
There is not one common definition: some say that research is a step in the discovery process; others seem to use research and discovery interchangeably; someone even described them as one skillset; or, like the comment by @benny-skogberg, understand discovery as an initial idea before doing research.
All are valid in their own right; the one that makes sense to me is that discovery follows research, in the sense that good research of what is out there, and analysis of the results, can lead researchers to discover new aspects of a domain. The idea that good research facilitates discovery works for me.
Relating to the UX process this is quite simple.
Discovery is the period spent finding out the extent of the job at hand. This means studying the brief, running internal workshops with the client, holding stakeholder interviews, etc. Stuff that really adds to the definition of the task.
Research is trying to find out what product the users want at the end of the process. Now that you know what the client wants, you need to find out how that sits with the users and how to keep them happy while solving the business problem.
Discovery usually only happens at the beginning of a project, whereas research continues throughout design and development, right up to delivery.
The importance of discovery research
Illuminating the intricate circuitry of the brain
The Queensland Brain Institute is a world-leading neuroscience research centre, established in 2003 with a vision to understand the mechanisms of brain function and progress treatments for neurological disorders and diseases.
The path scientists follow to discovery is never a straight line and is inspired by a deep desire to explore new ground.
Fundamental discovery science provides intellectual freedom for scientists to flex their creative muscle and develop ideas or concepts that capture their curiosity but don’t have an immediate or obvious outcome.
Take, for example, the use of fluorescent proteins to study the brain, a technique that has revolutionised neuroscience. It would never have resulted without Professor Osamu Shimomura asking "Why does the jellyfish Aequorea victoria glow bright green when agitated?"
After a disappointing day in the lab, Shimomura poured his experiment into the sink, never expecting the eureka moment that followed. The glowing water in the sink full of jellyfish before him led to the discovery of a green fluorescent protein (GFP).
Shimomura’s findings led to a proliferation of research by many scientists. His Nobel prize-winning studies completely changed research, allowing scientists to tag a protein or molecule of interest with a fluorescent marker. Professor Martin Chalfie took Shimomura’s breakthrough one step further, by injecting the gene that expresses GFP into the DNA of roundworm C. elegans . The resulting bright, fluorescent displays revealed where the gene expressed proteins.
In another experiment, Professor Roger Tsien modified the GFP gene so it would express multi-coloured arrays, resulting in the stunning and informative “brainbow”. The neurons of the mouse brain were mapped for the first time using this new technique, allowing researchers to distinguish individual neurons, study their connections and deduce how these connections affect function in the healthy and disordered brain.
The multi-coloured ‘brainbow’ occurs when neurons are distinguished by fluorescent proteins — a dazzling display that began with the simple observation of jellyfish. Credit: Leonie Kirszenblat
‘Fluorescence’ research represents decades of fearless commitment to embracing the unknown and culminated in a Nobel Prize. Even though the discovery was made serendipitously at the start, that moment has forever changed how scientists study the human brain — its hidden potential now on show.
Researchers at the Queensland Brain Institute continue this legacy by illuminating the intricate circuitry of the brain with advanced, cutting-edge technology to investigate promising new treatments and innovative applications.
The human brain has limitless opportunity for discovery, and curiosity is the momentum driving our scientists to break new ground.
A message from QBI Director, Professor Pankaj Sah
Scientific discovery has had an indelible impact on our health and daily lives, but there is still so much to learn. Discovery research gives scientists the opportunity to take the risks needed to tackle the unknown – mistakes are part of the learning curve. The data that scientists generate guides new research endeavours toward cures for diseases and lifestyle-improving applications.
The human brain is not only the object of our research, it is also the engine that drives discovery, with curiosity the momentum driving our scientists to break new ground. Empowering researchers with funding to undertake fundamental discovery research has tangible results.
A study* of new medicines approved by the USA’s Food and Drug Administration (FDA) found that every single one of the 210 new medicines approved from 2010–2016 was developed from discoveries in fundamental science. The benefits of discovery science are broader than new disease treatments, though. It also provides insight into new architectures for information processing and storage and delivers breakthroughs of which we can’t even dream right now.
*Cleary, E. G., et al. Contribution of NIH funding to new drug approvals 2010–2016. Proc Natl Acad Sci USA 115, 2329–2334. doi:10.1073/pnas.1715368115 (2018).
Scientists from around the world have been attracted to the Queensland Brain Institute to join our quest to answer fundamental questions about the brain: how it forms, its structure, the cells of which it is composed, the genes it expresses, and how, ultimately, this knowledge underpins our interpretation of the world around us and our behaviour. The dedication to pursue those ideas, no matter how intangible or far-reaching, is providing the foundation for discoveries which will help lead to clinical outcomes.
Many research projects at the Queensland Brain Institute illustrate how fundamental discoveries are leading to quantifiable progress.
Professor Pankaj Sah. Credit: Patrick Hamilton.
A piece of brain tissue the size of a grain of sand contains 100,000 neurons and a billion synapses.
Why Your Business Needs Discovery Research
Summary: Businesses often develop solutions without first checking whether there is a need for them, paving the path to failure. Discovery research can help various internal teams (product, marketing, HR, operations) understand users’ underlying needs and problems and start the journey to create relevant solutions.
10 minutes to read. By Michaela Mora on March 3, 2021. Topics: Business Strategy, Market Research, Qualitative Research, Quantitative Research, UX Research
Discovery research is exploratory research we need when we don’t know in which direction to go to find solutions to a problem. This is nothing new. In market research, we call this “exploratory research,” but I’ll admit that “discovery” sounds more exciting.
“Build The Right Thing” is a good motto to describe the essence of this type of research in contrast with research used to validate hypotheses or solutions. Research for hypothesis validation should come after discovery research to support objectives that fall under the “Build the Thing Right” motto.
Discovery research is about how we define its goals, not about data collection methods to accomplish them.
Barriers to Discovery Research
Unfortunately, there is much confusion about these two types of research, often driven by egos, hubris, short deadlines, small budgets, and a lack of knowledge of research fundamentals.
One of the main barriers to discovery research is the belief among many C-suite executives and team leaders (product development, sales & marketing, customer service, etc.) that they already know what needs to be built, who the customers are, what they need, etc. This belief is often based on anecdotal evidence and accumulated industry experience fueling confirmation bias and “gut-feeling.”
Discovery research can be threatening in a business culture that doesn’t see failure as an opportunity to learn. This also happens in businesses where questioning assumptions and the status quo sounds like a foreign language.
I have seen this in teams inside established businesses, but it is also very common among startups with founders enamored with a solution based on personal experiences. They think they represent all the users, which is often very far from the truth.
The Solution and the Biggest Obstacle
The biggest obstacle, however, is that to fight this belief we need to deal with counterfactual hypotheses to assess the risks of not doing discovery research. Essentially, we need to estimate opportunity costs. This is a hard mental exercise for many decision-makers. It is representative of Daniel Kahneman’s System 2 thinking process (effortful, logical, calculating, conscious, infrequent).
Thinking in counterfactuals takes time and threatens deadlines.
The driving counterfactual question to trigger discovery research should be “What would be the cost of going in the wrong direction by skipping discovery research?”
This estimation requires thinking in terms of wasted time and money, among other things, by (a back-of-the-envelope sketch follows this list):
- Investing in the wrong product(s) and features.
- Targeting the wrong market segment.
- Setting the wrong prices.
- Committing to the wrong brand positioning.
- Launching the wrong marketing strategy.
- Hiring the wrong talent.
- Investing in equipment, tools, and processes that may be misaligned with customers’ needs.
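A back-of-the-envelope version of this estimation can be written out explicitly. In the sketch below, every figure is hypothetical and would in practice come from internal data gathering:

```python
# Hypothetical expected-opportunity-cost estimate for skipping discovery.
# All figures are illustrative, not from the article.
cost_wrong_direction = 500_000   # e.g., building and marketing the wrong feature set
p_wrong_without_discovery = 0.4  # estimated chance of picking the wrong path unaided
p_wrong_with_discovery = 0.1     # estimated chance after discovery research
cost_of_discovery = 60_000       # research time, incentives, analysis

expected_loss_skip = p_wrong_without_discovery * cost_wrong_direction
expected_loss_do = p_wrong_with_discovery * cost_wrong_direction + cost_of_discovery

print(f"Expected loss if we skip discovery: ${expected_loss_skip:,.0f}")
print(f"Expected loss if we do discovery:   ${expected_loss_do:,.0f}")
print(f"Expected value of discovery:        ${expected_loss_skip - expected_loss_do:,.0f}")
```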
To do this exercise in a productive way, we need to go beyond a thinking exercise. It often requires internal data gathering to get an accurate estimation of the opportunity cost.
Strategic vs. Tactical Discovery Research
Discovery research can be strategic or tactical in nature depending on the unknowns we are trying to discover across different user experience (UX) touchpoints (product development, pre-sale, point of sale, and post-sale).
Strategic Discovery Research
To decide on a long-term strategy for product development, marketing, and internal operations to support business growth, companies may need to conduct discovery research to answer questions such as:
- Is there a need in the market for our products and services?
- Which products and services should we develop to meet unmet needs?
- Which market segment(s) should we target?
- Who are our current customers in terms of needs, attitudes, and behaviors?
- How should we position our brand(s), product(s), and service(s)? How are we different from our competitors in current and potential customers’ minds?
- What are the gaps in our internal processes and systems to support a good customer experience across all UX touchpoints?
Strategic discovery research requires the involvement of stakeholders from different areas and at different levels in the company so the insights can really inform the next steps in the company strategy.
This type of research can be large in scope and requires a champion at the highest level, a team lead that can coordinate and move the initiative forward, and guidance from experienced researchers (if they are not the team lead).
Skip Strategic Discovery Research at your Own Peril
A good example of what can happen when discovery research is skipped is the failure of Quibi. This was a short-form streaming platform that generated content for viewing on mobile devices. It was founded by two well-known Hollywood names (Jeffrey Katzenberg and Meg Whitman), raised $1.75 billion from investors, went live in April 2020, and was forced to close in December 2020.
In a letter to employees and investors, the founders attributed Quibi’s failure to one of two reasons: the idea behind Quibi either “wasn’t strong enough to justify a stand-alone streaming service or the service’s launch in the middle of a pandemic was particularly ill-timed. Unfortunately, we will never know, but we suspect it’s been a combination of the two.”
The most amazing aspect of this case is how they were able to convince investors to give them $1.75B without providing evidence there was a need for another streaming service. How could they ignore the market saturation we were already seeing in 2020, with services like Netflix, HBO Max, Hulu, CBS All Access, YouTube, and Peacock? My hypothesis is that hubris and egos were the driving force behind this epic failure.
Tactical Discovery Research
We also use discovery research to uncover solutions to problems at a more tactical level where the scope may be limited to specific areas in product development, marketing or operations. For example, we may want to explore:
- What are customers’ unmet needs in this particular area?
- What needs is this particular product meeting?
- What is the path to purchase for customers trying to buy product X?
- What are the purchase drivers for this specific product category? Which ones are driving customers to our products/services?
- What barriers to good customer service are we putting up for our customers?
- Why are customers not willing to pay for this product or service? What is missing in the value we are trying to offer?
Validation Research Confusion
It is at the tactical level where many teams confuse discovery research with validation research. For example, the C-suite or an internal team may have already decided on a solution and even created a prototype, and they want to find ways to make it fit for the target user, develop add-on features, or improve it. They may still have questions about the viability of the solution. They may not know how users will react to it or use it, but these are not discovery questions. These are hypothesis validation questions.
Any research done with a solution in mind is done under the assumption that we already know this is the right solution for our target user, and we just need more insights into how to make it better. We are trying to test hypotheses about how customers may use it, what may be missing to improve the experience, the appeal of new feature ideas, etc.
When a team only engages in validation research, there is a high risk of going deeper in the wrong direction.
I have been involved in research projects in which the clients had resisted doing discovery research because they were already committed to a particular solution. Under the spell of the sunk-cost fallacy, they kept plowing ahead into a lost cause.
Sometimes, I have been able to break the spell by embedding discovery research questions into the validation research design. Unfortunately, at times, the spell is so strong they simply ignore parts of the results or keep rationalizing their decision to avoid the pain of recognizing failure.
Discovery Research Need & Frequency
Discovery research is often seen as a one-off event done at the beginning of a project or strategic initiative. However, businesses need to do it more often than they realize.
Iterative discovery research is central to the user-centric agile process that allows companies to align their strategy and tactics for product development, marketing, and internal operations with what customers need.
Short deadlines and small budgets often conspire against the good intentions of executives and internal teams to include discovery research in the development and implementation of their strategies.
Furthermore, validation research “feels” more actionable and concrete, which leads to favoring research such as product concept testing, positioning concept testing, usability testing, pricing optimization, etc.
To determine the need and frequency of discovery research we should ask these questions:
- Do we really know which path to follow?
- Is there reliable evidence to support our assumptions that this is the right path to follow?
- What would be the cost of following the wrong path, if our assumptions are incorrect?
At every step of the process, if we are not sure we are embarking on building the right thing, if there is no sufficient and reliable evidence suggesting we are building the right thing, or if the cost of building the wrong thing is high, we need discovery research.
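Read as a decision rule, those three questions reduce to a simple disjunction. A minimal sketch (the function and parameter names are mine, not the author's):

```python
# Hypothetical decision rule distilled from the three questions above:
# discovery is needed if any one condition for skipping it fails.
def need_discovery(path_known: bool, evidence_reliable: bool,
                   cost_of_wrong_path_high: bool) -> bool:
    return (not path_known) or (not evidence_reliable) or cost_of_wrong_path_high

# Example: we think we know the path, but the evidence is shaky
# and a wrong bet would be expensive -> do discovery research.
print(need_discovery(path_known=True, evidence_reliable=False,
                     cost_of_wrong_path_high=True))  # True
```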
The cost criterion is a tricky one. It requires thinking of long- and short-term consequences. The initial cost of going with the wrong solution may be low, but long-term it may turn costly for the company.
Discovery Research Data Collection Methods
Discovery research is often associated with qualitative research given its exploration goals. However, we shouldn’t confuse methods with goals. Although discovery research is exploratory in its goals, we can use both qualitative and quantitative research methods to achieve them.
For example, a quantitative market segmentation looking to uncover target segments of interest in a particular product category is an exploration of that product space. We don’t know which segments, if any, will emerge. We would also need to do qualitative research to identify relevant variables before jumping into designing the segmentation survey; that, too, would be discovery research.
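For illustration, a minimal segmentation sketch using synthetic survey data and scikit-learn; the variables and the two-segment structure are invented, and k-means is only one of several clustering approaches used in practice:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical survey data: rows are respondents, columns are attitudinal
# and behavioral measures identified in earlier qualitative research.
rng = np.random.default_rng(0)
respondents = np.vstack([
    rng.normal([2, 8, 1], 0.8, size=(40, 3)),  # e.g., price-sensitive light users
    rng.normal([8, 2, 7], 0.8, size=(40, 3)),  # e.g., feature-hungry heavy users
])

X = StandardScaler().fit_transform(respondents)

# Exploration, not validation: try several cluster counts and see
# which segmentation the data itself supports best.
for k in range(2, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, "clusters, silhouette =", round(silhouette_score(X, labels), 2))
```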
If you have already defined segments by specific criteria and want to learn whether your product appeals to those segments, whether they would pay for it, and how they would use it (for improvement purposes), you are venturing into validation research.
Validation research can also use qualitative and quantitative research methods, although different methods provide different levels of validation. For instance, usability testing, both moderated and unmoderated, qualitative or quantitative is more about validation than exploration. Even in qualitative usability testing, we are trying to confirm or deny hypotheses about the use of physical or digital products.
For example, if we are testing the navigation of a website, the initial design assumes that users will follow a particular path to find a certain type of information or the shopping cart. As we observe and ask questions, we gather evidence of how easy or difficult it is for users to accomplish the tasks we give them. Those insights are then used to make design changes that help users do what they came to do.
If the hypothesis underlying our design is correct, users will have little trouble doing what they came to do on the website, and our hypothesis is confirmed. If, on the other hand, they can’t find information, our design hypothesis is incorrect, and we need to make changes.
To summarize, the data collection methods we can use for discovery research may include:
- In-depth interviews (IDIs), sometimes called user interviews among UX practitioners
- Diary studies
- Ethnography (digital or traditional)
- Contextual inquiry
- Online bulletin-board discussions
- Focus groups
When we do validation research, we test specific solutions to measure their viability and to make improvements. The most common methods, often based on survey methodology, are:
- Quantitative concept testing
- Product user testing
- Qualitative and quantitative user testing
- A/B testing (a minimal sketch follows this list)
- Qualitative research focused on specific solutions, complementing quantitative research before or after it is conducted
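To make one of these concrete, below is a minimal sketch of a two-variant A/B comparison using a standard two-proportion z-test. The conversion counts are invented for illustration; a real test needs sample sizes and significance thresholds decided in advance.

```python
# Minimal validation-style A/B test: does variant B convert better than
# variant A? The counts below are invented for illustration only.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test using the pooled normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=153, n_b=2380)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real difference
```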
Again, do not confuse methods with goals. To reiterate: discovery research is defined by its goals, not by the data collection methods used to accomplish them.
Discovery research is necessary to explore problems so that we can decide on a potential solution. It can help mitigate risks and save time and money. Consequently, it should be seen as an investment in effective short- and long-term business decisions. It can guide decisions related to the next steps in product development, marketing, company organization, operations, and investment strategy.
If your business wants to survive long-term, you need to make sure you are building the right thing, and for that, you need discovery research.
Discovery research
A topic in research methodology
Research in one of the major traditions, or paradigms, is often referred to as discovery research.
"Confirmatory research sets out to test a specific hypothesis to the exclusion of other considerations; whereas discovery research seeks to find out what might be important in understanding a research context, presenting findings as conjectural (e.g., 'suggestive', 'indicative') rather than definite" (Taber, 2013: 45)
Biddle and Anderson (1986) contrast their 'confirmatory position' with what they label the 'discovery perspective'. This term is used for approaches that,
"have in common the belief that social concepts and explanations are socially constructed by both citizens and social scientists. Social knowledge and its use are both assumed to be based on values …and social facts are uninterpretable outside of a theoretical, hence historical, context" (Biddle and Anderson, 1986: 237)
The historian of science, Patricia Fara, describes how,
"Historians proceed heuristically, oscillating between reading, writing, and thinking. When you start out, you have little idea which path you will eventually tunnel out through a confusing mass of detailed information. It is only when you try to explain what you have already discovered that you come to realise where the gaps lie and where you will need to probe further." Fara, 2012
When comparing work in the natural sciences (often undertaken with a positivist mindset) and disciplines such as history that rely on an interpretivist approach, it may seem that paradigmatic contrasts (such as 'discovery' versus 'confirmatory') belong in different domains of enquiry. Yet in research into social phenomena (such as educational phenomena), it often makes sense to see discovery research as an exploratory phase necessary before confirmatory research can sensibly be employed.
"That much of educational research concerns the former, more exploratory, types of study may be partly related to the relative immaturity of educational research compared with the established natural sciences. However there are also inherent features of education that channel much research towards the discovery pole. One of these features…concerns the inherent complexity of educational phenomena, which are often embedded in situations from which they can not be readily be disembodied whilst retaining their integrity." Taber, 2014
However, the corollary is that discovery research may not be seen as definitive, as it always invites a confirmatory follow-up to test its inferences. This is especially so if different studies offer competing accounts (or complementary accounts of relevant features where it may be important to identify the most significant factors at work).
"No choice between competing interpretations can be achieved through Verstehen [ital] itself. The interpretations remain arbitrary until they are subjected to a test in the usual manner." Habermas, 1967/1998
Sources cited:
- Biddle, B. J., & Anderson, D. S. (1986). Theory, methods, knowledge and research on teaching. In M. C. Wittrock (Ed.), Handbook of Research on Teaching (3rd ed., pp. 230-252). New York: Macmillan.
- Fara, P. (2012). Erasmus Darwin: Sex, Science and Serendipity. Oxford: Oxford University Press.
- Habermas, J. (1967/1988). On the Logic of the Social Sciences (S. W. Nicholsen & J. A. Stark, Trans.). Cambridge: Polity Press.
- Taber, K. S. (2013). Classroom-based Research and Evidence-based Practice: An Introduction (2nd ed.). London: Sage.
- Taber, K. S. (2014). Methodological issues in science education research: A perspective from the philosophy of science. In M. R. Matthews (Ed.), International Handbook of Research in History, Philosophy and Science Teaching (Vol. 3, pp. 1839-1893). Dordrecht: Springer Netherlands.
Scientific Discovery
Scientific discovery is the process or product of successful scientific inquiry. Objects of discovery can be things, events, processes, causes, and properties as well as theories and hypotheses and their features (their explanatory power, for example). Most philosophical discussions of scientific discoveries focus on the generation of new hypotheses that fit or explain given data sets or allow for the derivation of testable consequences. Philosophical discussions of scientific discovery have been intricate and complex because the term “discovery” has been used in many different ways, both to refer to the outcome and to the procedure of inquiry. In the narrowest sense, the term “discovery” refers to the purported “eureka moment” of having a new insight. In the broadest sense, “discovery” is a synonym for “successful scientific endeavor” tout court. Some philosophical disputes about the nature of scientific discovery reflect these terminological variations.
Philosophical issues related to scientific discovery arise about the nature of human creativity, specifically about whether the “eureka moment” can be analyzed and about whether there are rules (algorithms, guidelines, or heuristics) according to which such a novel insight can be brought about. Philosophical issues also arise about the analysis and evaluation of heuristics, about the characteristics of hypotheses worthy of articulation and testing, and, on the meta-level, about the nature and scope of philosophical analysis itself. This essay describes the emergence and development of the philosophical problem of scientific discovery and surveys different philosophical approaches to understanding scientific discovery. In doing so, it also illuminates the meta-philosophical problems surrounding the debates, and, incidentally, the changing nature of philosophy of science.
Entry contents:
- 1. Introduction
- 2. Scientific inquiry as discovery
- 3. Elements of discovery
- 4. Pragmatic logics of discovery
- 5. The distinction between the context of discovery and the context of justification
- 6. Logics of discovery after the context distinction
- 6.1 Discovery as abduction
- 6.2 Heuristic programming
- 7. Anomalies and the structure of discovery
- 8. Methodologies of discovery
- 8.1 Discoverability
- 8.2 Preliminary appraisal
- 8.3 Heuristic strategies
- 9.1 Kinds and features of creativity
- 9.2 Analogy
- 9.3 Mental models
- 10. Machine discovery
- 11. Social epistemology and discovery
- 12. Integrated approaches to knowledge generation
- Other internet resources
- Related entries

1. Introduction
Philosophical reflection on scientific discovery occurred in different phases. Prior to the 1930s, philosophers were mostly concerned with discoveries in the broad sense of the term, that is, with the analysis of successful scientific inquiry as a whole. Philosophical discussions focused on the question of whether there were any discernible patterns in the production of new knowledge. Because the concept of discovery did not have a specified meaning and was used in a very wide sense, almost all discussions of scientific method and practice could potentially be considered as early contributions to reflections on scientific discovery. In the course of the 18th century, as philosophy of science and science gradually became two distinct endeavors with different audiences, the term “discovery” became a technical term in philosophical discussions. Different elements of scientific inquiry were specified. Most importantly, during the 19th century, the generation of new knowledge came to be clearly and explicitly distinguished from its assessment, and thus the conditions for the narrower notion of discovery as the act or process of conceiving new ideas emerged. This distinction was encapsulated in the so-called “context distinction” between the “context of discovery” and the “context of justification”.
Much of the discussion about scientific discovery in the 20th century revolved around this distinction. It was argued that conceiving a new idea is a non-rational process, a leap of insight that cannot be captured in specific instructions. Justification, by contrast, is a systematic process of applying evaluative criteria to knowledge claims. Advocates of the context distinction argued that philosophy of science is exclusively concerned with the context of justification. The assumption underlying this argument is that philosophy is a normative project; it determines norms for scientific practice. Given this assumption, only the justification of ideas, not their generation, can be the subject of philosophical (normative) analysis. Discovery, by contrast, can only be a topic for empirical study. By definition, the study of discovery is outside the scope of philosophy of science proper.
The introduction of the context distinction and the disciplinary distinction between empirical science studies and normative philosophy of science that was tied to it spawned meta-philosophical disputes. For a long time, philosophical debates about discovery were shaped by the notion that philosophical and empirical analyses are mutually exclusive. Some philosophers insisted, like their predecessors prior to the 1930s, that the philosopher’s tasks include the analysis of actual scientific practices and that scientific resources be used to address philosophical problems. They maintained that it is a legitimate task for philosophy of science to develop a theory of heuristics or problem solving. But this position was the minority view in philosophy of science until the last decades of the 20 th century. Philosophers of discovery were thus compelled to demonstrate that scientific discovery was in fact a legitimate part of philosophy of science. Philosophical reflections about the nature of scientific discovery had to be bolstered by meta-philosophical arguments about the nature and scope of philosophy of science.
Today, however, there is wide agreement that philosophy and empirical research are not mutually exclusive. Not only do empirical studies of actual scientific discoveries in past and present inform philosophical thought about the structure and cognitive mechanisms of discovery, but works in psychology, cognitive science, artificial intelligence and related fields have become integral parts of philosophical analyses of the processes and conditions of the generation of new knowledge. Social epistemology has opened up another perspective on scientific discovery, reconceptualizing knowledge generation as group process.
Prior to the 19th century, the term “discovery” was used broadly to refer to a new finding, such as a new cure, an unknown territory, an improvement of an instrument, or a new method of measuring longitude. One strand of the discussion about discovery dating back to ancient times concerns the method of analysis as the method of discovery in mathematics and geometry, and, by extension, in philosophy and scientific inquiry. Following the analytic method, we seek to find or discover something – the “thing sought,” which could be a theorem, a solution to a geometrical problem, or a cause – by analyzing it. In the ancient Greek context, analytic methods in mathematics, geometry, and philosophy were not clearly separated; the notion of finding or discovering things by analysis was relevant in all these fields.
In the ensuing centuries, several natural and experimental philosophers, including Avicenna and Zabarella, Bacon and Boyle, the authors of the Port-Royal Logic and Newton, and many others, expounded rules of reasoning and methods for arriving at new knowledge. The ancient notion of analysis still informed these rules and methods. Newton’s famous thirty-first query in the second edition of the Opticks outlines the role of analysis in discovery as follows: “As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths … By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. This is the Method of Analysis” (Newton 1718, 380, see Koertge 1980, section VI). Early modern accounts of discovery captured knowledge-seeking practices in the study of living and non-living nature, ranging from astronomy and physics to medicine, chemistry, and agriculture. These rich accounts of scientific inquiry were often expounded to bolster particular theories about the nature of matter and natural forces and were not explicitly labeled “methods of discovery”, yet they are, in fact, accounts of knowledge generation and proper scientific reasoning, covering topics such as the role of the senses in knowledge generation, observation and experimentation, analysis and synthesis, induction and deduction, hypotheses, probability, and certainty.
Bacon’s work is a prominent example. His view of the method of science as it is presented in the Novum Organum showed how best to arrive at knowledge about “form natures” (the most general properties of matter) via a systematic investigation of phenomenal natures. Bacon described how first to collect and organize natural phenomena and experimentally produced facts in tables, how to evaluate these lists, and how to refine the initial results with the help of further trials. Through these steps, the investigator would arrive at conclusions about the “form nature” that produces particular phenomenal natures. Bacon expounded the procedures of constructing and evaluating tables of presences and absences to underpin his matter theory. In addition, in his other writings, such as his natural history Sylva Sylvarum or his comprehensive work on human learning De Augmentis Scientiarum, Bacon exemplified the “art of discovery” with practical examples and discussions of strategies of inquiry.
Like Bacon and Newton, several other early modern authors advanced ideas about how to generate and secure empirical knowledge, what difficulties may arise in scientific inquiry, and how they could be overcome. The close connection between theories about matter and force and scientific methodologies that we find in early modern works was gradually severed. 18th- and early 19th-century authors on scientific method and logic cited early modern approaches mostly to model proper scientific practice and reasoning, often creatively modifying them (section 3). Moreover, they developed the earlier methodologies of experimentation, observation, and reasoning into practical guidelines for discovering new phenomena and devising probable hypotheses about cause-effect relations.
It was common in 20th-century philosophy of science to draw a sharp contrast between those early theories of scientific method and modern approaches. 20th-century philosophers of science interpreted 17th- and 18th-century approaches as generative theories of scientific method. They function simultaneously as guides for acquiring new knowledge and as assessments of the knowledge thus obtained, whereby knowledge that is obtained “in the right way” is considered secure (Laudan 1980; Schaffner 1993: chapter 2). On this view, scientific methods are taken to have probative force (Nickles 1985). According to modern, “consequentialist” theories, propositions must be established by comparing their consequences with observed and experimentally produced phenomena (Laudan 1980; Nickles 1985). It was further argued that, when consequentialist theories were on the rise, the two processes of generation and assessment of an idea or hypothesis became distinct, and the view that the merit of a new idea does not depend on the way in which it was arrived at became widely accepted.
More recent research in history of philosophy of science has shown, however, that there was no such sharp contrast. Consequentialist ideas were advanced throughout the 18th century, and the early modern generative theories of scientific method and knowledge were more pragmatic than previously assumed. Early modern scholars did not assume that this procedure would lead to absolute certainty. One could only obtain moral certainty for the propositions thus secured.
During the 18th and 19th centuries, the different elements of discovery gradually became separated and discussed in more detail. Discussions concerned the nature of observations and experiments, the act of having an insight and the processes of articulating, developing, and testing the novel insight. Philosophical discussion focused on the question of whether and to what extent rules could be devised to guide each of these processes.
Numerous 19th-century scholars contributed to these discussions, including Claude Bernard, Auguste Comte, George Gore, John Herschel, W. Stanley Jevons, Justus von Liebig, John Stuart Mill, and Charles Sanders Peirce, to name only a few. William Whewell’s work, especially the two volumes of Philosophy of the Inductive Sciences of 1840, is a noteworthy and, later, much discussed contribution to the philosophical debates about scientific discovery because he explicitly distinguished the creative moment, or “happy thought” as he called it, from other elements of scientific inquiry and because he offered a detailed analysis of the “discoverer’s induction”, i.e., the pursuit and evaluation of the new insight. Whewell’s approach is not unique, but for late 20th-century philosophers of science, his comprehensive, historically informed philosophy of discovery became a point of orientation in the revival of interest in scientific discovery processes.
For Whewell, discovery comprised three elements: the happy thought, the articulation and development of that thought, and the testing or verification of it. His account was in part a description of the psychological makeup of the discoverer. For instance, he held that only geniuses could have those happy thoughts that are essential to discovery. In part, his account was an account of the methods by which happy thoughts are integrated into the system of knowledge. According to Whewell, the initial step in every discovery is what he called “some happy thought, of which we cannot trace the origin; some fortunate cast of intellect, rising above all rules. No maxims can be given which inevitably lead to discovery” (Whewell 1996 [1840]: 186). An “art of discovery” in the sense of a teachable and learnable skill does not exist according to Whewell. The happy thought builds on the known facts, but according to Whewell it is impossible to prescribe a method for having happy thoughts.
In this sense, happy thoughts are accidental. But in an important sense, scientific discoveries are not accidental. The happy thought is not a wild guess. Only the person whose mind is prepared to see things will actually notice them. The “previous condition of the intellect, and not the single fact, is really the main and peculiar cause of the success. The fact is merely the occasion by which the engine of discovery is brought into play sooner or later. It is, as I have elsewhere said, only the spark which discharges a gun already loaded and pointed; and there is little propriety in speaking of such an accident as the cause why the bullet hits its mark.” (Whewell 1996 [1840]: 189).
Having a happy thought is not yet a discovery, however. The second element of a scientific discovery consists in binding together—“colligating”, as Whewell called it—a set of facts by bringing them under a general conception. Not only does the colligation produce something new, but it also shows the previously known facts in a new light. Colligation involves, on the one hand, the specification of facts through systematic observation, measurements and experiment, and on the other hand, the clarification of ideas through the exposition of the definitions and axioms that are tacitly implied in those ideas. This process is extended and iterative. The scientists go back and forth between binding together the facts, clarifying the idea, rendering the facts more exact, and so forth.
The final part of the discovery is the verification of the colligation involving the happy thought. This means, first and foremost, that the outcome of the colligation must be sufficient to explain the data at hand. Verification also involves judging the predictive power, simplicity, and “consilience” of the outcome of the colligation. “Consilience” refers to a higher range of generality (broader applicability) of the theory (the articulated and clarified happy thought) that the actual colligation produced. Whewell’s account of discovery is not a deductivist system. It is essential that the outcome of the colligation be inferable from the data prior to any testing (Snyder 1997).
Whewell’s theory of discovery clearly separates three elements: the non-analyzable happy thought or eureka moment; the process of colligation which includes the clarification and explication of facts and ideas; and the verification of the outcome of the colligation. His position that the philosophy of discovery cannot prescribe how to think happy thoughts has been a key element of 20th-century philosophical reflection on discovery. In contrast to many 20th-century approaches, Whewell’s philosophical conception of discovery also comprises the processes by which the happy thoughts are articulated. Similarly, the process of verification is an integral part of discovery. The procedures of articulation and test are both analyzable according to Whewell, and his conceptions of colligation and verification serve as guidelines for how the discoverer should proceed. To verify a hypothesis, the investigator needs to show that it accounts for the known facts, that it foretells new, previously unobserved phenomena, and that it can explain and predict phenomena which are explained and predicted by a hypothesis that was obtained through an independent happy thought-cum-colligation (Ducasse 1951).
Whewell’s conceptualization of scientific discovery offers a useful framework for mapping the philosophical debates about discovery and for identifying major issues of concern in 20th-century philosophical debates. Until the late 20th century, most philosophers operated with a notion of discovery that is narrower than Whewell’s. In more recent treatments of discovery, however, the scope of the term “discovery” is limited to either the first of these elements, the “happy thought”, or to the happy thought and its initial articulation. In the narrower conception, what Whewell called “verification” is not part of discovery proper. Secondly, until the late 20th century, there was wide agreement that the eureka moment, narrowly construed, is an unanalyzable, even mysterious leap of insight. The main disagreements concerned the question of whether the process of developing a hypothesis (the “colligation” in Whewell’s terms) is, or is not, a part of discovery proper – and if it is, whether and how this process is guided by rules. Many of the controversies in the 20th century about the possibility of a philosophy of discovery can be understood against the background of the disagreement about whether the process of discovery does or does not include the articulation and development of a novel thought. Philosophers also disagreed on the issue of whether it is a philosophical task to explicate these rules.
In early 20th-century logical empiricism, the view that discovery is or at least crucially involves a non-analyzable creative act of a gifted genius was widespread. Alternative conceptions of discovery, especially in the pragmatist tradition, emphasize that discovery is an extended process, i.e., that the discovery process includes the reasoning processes through which a new insight is articulated and further developed.
In the pragmatist tradition, the term “logic” is used in the broad sense to refer to strategies of human reasoning and inquiry. While the reasoning involved does not proceed according to the principles of demonstrative logic, it is systematic enough to deserve the label “logical”. Proponents of this view argued that traditional (here: syllogistic) logic is an inadequate model of scientific discovery because it misrepresents the process of knowledge generation as grossly as the notion of an “aha moment” does.
Early 20th-century pragmatic logics of discovery can best be described as comprehensive theories of the mental and physical-practical operations involved in knowledge generation, as theories of “how we think” (Dewey 1910). Among the mental operations are classification, determination of what is relevant to an inquiry, and the conditions of communication of meaning; among the physical operations are observation and (laboratory) experiments. These features of scientific discovery are either not or only insufficiently represented by traditional syllogistic logic (Schiller 1917: 236–7).
Philosophers advocating this approach agree that the logic of discovery should be characterized as a set of heuristic principles rather than as a process of applying inductive or deductive logic to a set of propositions. These heuristic principles are not understood to show the path to secure knowledge. Heuristic principles are suggestive rather than demonstrative (Carmichael 1922, 1930). One recurrent feature in these accounts of the reasoning strategies leading to new ideas is analogical reasoning (Schiller 1917; Benjamin 1934, see also section 9.2). However, in academic philosophy of science, endeavors to develop more systematically the heuristics guiding discovery processes were soon eclipsed by the advance of the distinction between contexts of discovery and justification.
The distinction between “context of discovery” and “context of justification” dominated and shaped the discussions about discovery in 20th-century philosophy of science. The context distinction marks the distinction between the generation of a new idea or hypothesis and the defense (test, verification) of it. As the previous sections have shown, the distinction among different elements of scientific inquiry has a long history, but in the first half of the 20th century, the distinction between the different features of scientific inquiry turned into a powerful demarcation criterion, potent within philosophy of science, between “genuine” philosophy and other fields of science studies. The boundary between context of discovery (the de facto thinking processes) and context of justification (the de jure defense of the correctness of these thoughts) was now understood to determine the scope of philosophy of science, whereby philosophy of science is conceived as a normative endeavor. Advocates of the context distinction argue that the generation of a new idea is an intuitive, nonrational process; it cannot be subject to normative analysis. Therefore, the study of scientists’ actual thinking can only be the subject of psychology, sociology, and other empirical sciences. Philosophy of science, by contrast, is exclusively concerned with the context of justification.
The terms “context of discovery” and “context of justification” are often associated with Hans Reichenbach’s work. Reichenbach’s original conception of the context distinction is quite complex, however (Howard 2006; Richardson 2006). It does not map easily onto the disciplinary distinction mentioned above, because for Reichenbach, philosophy of science proper is partly descriptive. Reichenbach maintains that philosophy of science includes a description of knowledge as it really is. Descriptive philosophy of science reconstructs scientists’ thinking processes in such a way that logical analysis can be performed on them, and it thus prepares the ground for the evaluation of these thoughts (Reichenbach 1938: § 1). Discovery, by contrast, is the object of empirical—psychological, sociological—study. According to Reichenbach, the empirical study of discoveries shows that processes of discovery often correspond to the principle of induction, but this is simply a psychological fact (Reichenbach 1938: 403).
While the terms “context of discovery” and “context of justification” are widely used, there has been ample discussion about how the distinction should be drawn and what its philosophical significance is (cf. Kordig 1978; Gutting 1980; Zahar 1983; Leplin 1987; Hoyningen-Huene 1987; Weber 2005: chapter 3; Schickore and Steinle 2006). Most commonly, the distinction is interpreted as a distinction between the process of conceiving a theory and the assessment of that theory, specifically the assessment of the theory’s epistemic support. This version of the distinction is not necessarily interpreted as a temporal distinction. In other words, it is not usually assumed that a theory is first fully developed and then assessed. Rather, generation and assessment are two different epistemic approaches to theory: the endeavor to articulate, flesh out, and develop its potential and the endeavor to assess its epistemic worth. Within the framework of the context distinction, there are two main ways of conceptualizing the process of conceiving a theory. The first option is to characterize the generation of new knowledge as an irrational act, a mysterious creative intuition, a “eureka moment”. The second option is to conceptualize the generation of new knowledge as an extended process that includes a creative act as well as some process of articulating and developing the creative idea.
Both of these accounts of knowledge generation served as starting points for arguments against the possibility of a philosophy of discovery. In line with the first option, philosophers have argued that neither is it possible to prescribe a logical method that produces new ideas nor is it possible to reconstruct logically the process of discovery. Only the process of testing is amenable to logical investigation. This objection to philosophies of discovery has been called the “discovery machine objection” (Curd 1980: 207). It is usually associated with Karl Popper’s Logic of Scientific Discovery.
The initial state, the act of conceiving or inventing a theory, seems to me neither to call for logical analysis nor to be susceptible of it. The question how it happens that a new idea occurs to a man—whether it is a musical theme, a dramatic conflict, or a scientific theory—may be of great interest to empirical psychology; but it is irrelevant to the logical analysis of scientific knowledge. This latter is concerned not with questions of fact (Kant’s quid facti?), but only with questions of justification or validity (Kant’s quid juris?). Its questions are of the following kind. Can a statement be justified? And if so, how? Is it testable? Is it logically dependent on certain other statements? Or does it perhaps contradict them? […] Accordingly I shall distinguish sharply between the process of conceiving a new idea, and the methods and results of examining it logically. As to the task of the logic of knowledge—in contradistinction to the psychology of knowledge—I shall proceed on the assumption that it consists solely in investigating the methods employed in those systematic tests to which every new idea must be subjected if it is to be seriously entertained. (Popper 2002 [1934/1959]: 7–8)
With respect to the second way of conceptualizing knowledge generation, many philosophers argue in a similar fashion that because the process of discovery involves an irrational, intuitive process, which cannot be examined logically, a logic of discovery cannot be construed. Other philosophers turn against the philosophy of discovery even though they explicitly acknowledge that discovery is an extended, reasoned process. They present a meta-philosophical objection, arguing that a theory of articulating and developing ideas is not a philosophical but a psychological or sociological theory. In this perspective, “discovery” is understood as a retrospective label, which is attributed as a sign of accomplishment to some scientific endeavors. Sociological theories acknowledge that discovery is a collective achievement and the outcome of a process of negotiation through which “discovery stories” are constructed and certain knowledge claims are granted discovery status (Brannigan 1981; Schaffer 1986, 1994).
The impact of the context distinction on 20th-century studies of scientific discovery and on philosophy of science more generally can hardly be overestimated. The view that the process of discovery (however construed) is outside the scope of philosophy of science proper was widely shared amongst philosophers of science for most of the 20th century. The last section shows that there were some attempts to develop logics of discovery in the 1920s and 1930s, especially in the pragmatist tradition. But for several decades, the context distinction dictated what philosophy of science should be about and how it should proceed. The dominant view was that theories of mental operations or heuristics had no place in philosophy of science and that, therefore, discovery was not a legitimate topic for philosophy of science. Until the last decades of the 20th century, there were few attempts to challenge the disciplinary distinction tied to the context distinction. Only during the 1970s did the interest in philosophical approaches to discovery begin to increase again. But the context distinction remained a challenge for philosophies of discovery.
There are several lines of response to the disciplinary distinction tied to the context distinction. Each of these lines of response opens a philosophical perspective on discovery. Each proceeds on the assumption that philosophy of science may legitimately include some form of analysis of actual reasoning patterns as well as information from empirical sciences such as cognitive science, psychology, and sociology. All of these responses reject the idea that discovery is nothing but a mystical event. Discovery is conceived as an analyzable reasoning process, not just as a creative leap by which novel ideas spring into being fully formed. All of these responses agree that the procedures and methods for arriving at new hypotheses and ideas are no guarantee that the hypothesis or idea that is thus formed is necessarily the best or the correct one. Nonetheless, it is the task of philosophy of science to provide rules for making this process better. All of these responses can be described as theories of problem solving, whose ultimate goal is to make the generation of new ideas and theories more efficient.
But the different approaches to scientific discovery employ different terminologies. In particular, the term “logic” of discovery is sometimes used in a narrow sense and sometimes broadly understood. In the narrow sense, “logic” of discovery is understood to refer to a set of formal, generally applicable rules by which novel ideas can be mechanically derived from existing data. In the broad sense, “logic” of discovery refers to the schematic representation of reasoning procedures. “Logical” is just another term for “rational”. Moreover, while each of these responses combines philosophical analyses of scientific discovery with empirical research on actual human cognition, different sets of resources are mobilized, ranging from AI research and cognitive science to historical studies of problem-solving procedures. Also, the responses parse the process of scientific inquiry differently. Often, scientific inquiry is regarded as having two aspects, viz. generation and assessments of new ideas. At times, however, scientific inquiry is regarded as having three aspects, namely generation, pursuit or articulation, and assessment of knowledge. In the latter framework, the label “discovery” is sometimes used to refer just to generation and sometimes to refer to both generation and pursuit.
One response to the challenge of the context distinction draws on a broad understanding of the term “logic” to argue that we cannot but admit a general, domain-neutral logic if we do not want to assume that the success of science is a miracle (Jantzen 2016) and that a logic of scientific discovery can be developed (section 6). Another response, drawing on a narrow understanding of the term “logic”, is to concede that there is no logic of discovery, i.e., no algorithm for generating new knowledge, but that the process of discovery follows an identifiable, analyzable pattern (section 7).
Others argue that discovery is governed by a methodology. The methodology of discovery is a legitimate topic for philosophical analysis (section 8). Yet another response assumes that discovery is or at least involves a creative act. Drawing on resources from cognitive science, neuroscience, computational research, and environmental and social psychology, philosophers have sought to demystify the cognitive processes involved in the generation of new ideas. Philosophers who take this approach argue that scientific creativity is amenable to philosophical analysis (section 9.1).
All these responses assume that there is more to discovery than a eureka moment. Discovery comprises processes of articulating, developing, and assessing the creative thought, as well as the scientific community’s adjudication of what does, and does not, count as “discovery” (Arabatzis 1996). These are the processes that can be examined with the tools of philosophical analysis, augmented by input from other fields of science studies such as sociology, history, or cognitive science.
6. Logics of discovery after the context distinction
One way of responding to the demarcation criterion described above is to argue that discovery is a topic for philosophy of science because it is a logical process after all. Advocates of this approach to the logic of discovery usually accept the overall distinction between the two processes of conceiving and testing a hypothesis. They also agree that it is impossible to put together a manual that provides a formal, mechanical procedure through which innovative concepts or hypotheses can be derived: There is no discovery machine. But they reject the view that the process of conceiving a theory is a creative act, a mysterious guess, a hunch, a more or less instantaneous and random process. Instead, they insist that both conceiving and testing hypotheses are processes of reasoning and systematic inference, that both of these processes can be represented schematically, and that it is possible to distinguish better and worse paths to new knowledge.
This line of argument has much in common with the logics of discovery described in section 4 above but it is now explicitly pitched against the disciplinary distinction tied to the context distinction. There are two main ways of developing this argument. The first is to conceive of discovery in terms of abductive reasoning (section 6.1). The second is to conceive of discovery in terms of problem-solving algorithms, whereby heuristic rules aid the processing of available data and enhance the success in finding solutions to problems (section 6.2). Both lines of argument rely on a broad conception of logic, whereby the “logic” of discovery amounts to a schematic account of the reasoning processes involved in knowledge generation.
One argument, elaborated prominently by Norwood R. Hanson, is that the act of discovery—here, the act of suggesting a new hypothesis—follows a distinctive logical pattern, which is different from both inductive logic and the logic of hypothetico-deductive reasoning. The special logic of discovery is the logic of abductive or “retroductive” inferences (Hanson 1958). The argument that it is through an act of abductive inferences that plausible, promising scientific hypotheses are devised goes back to C.S. Peirce. This version of the logic of discovery characterizes reasoning processes that take place before a new hypothesis is ultimately justified. The abductive mode of reasoning that leads to plausible hypotheses is conceptualized as an inference beginning with data or, more specifically, with surprising or anomalous phenomena.
In this view, discovery is primarily a process of explaining anomalies or surprising, astonishing phenomena. The scientists’ reasoning proceeds abductively from an anomaly to an explanatory hypothesis in light of which the phenomena would no longer be surprising or anomalous. The outcome of this reasoning process is not one single specific hypothesis but the delineation of a type of hypotheses that is worthy of further attention (Hanson 1965: 64). According to Hanson, the abductive argument has the following schematic form (Hanson 1960: 104):
- Some surprising, astonishing phenomena p1, p2, p3, … are encountered.
- But p1, p2, p3, … would not be surprising were an hypothesis of H’s type to obtain. They would follow as a matter of course from something like H and would be explained by it.
- Therefore there is good reason for elaborating an hypothesis of type H—for proposing it as a possible hypothesis from whose assumption p1, p2, p3, … might be explained.
Drawing on the historical record, Hanson argues that several important discoveries were made relying on abductive reasoning, such as Kepler’s discovery of the elliptic orbit of Mars (Hanson 1958). It is now widely agreed, however, that Hanson’s reconstruction of the episode is not a historically adequate account of Kepler’s discovery (Lugg 1985). More importantly, while there is general agreement that abductive inferences are frequent in both everyday and scientific reasoning, these inferences are no longer considered as logical inferences. Even if one accepts Hanson’s schematic representation of the process of identifying plausible hypotheses, this process is a “logical” process only in the widest sense whereby the term “logical” is understood as synonymous with “rational”. Notably, some philosophers have even questioned the rationality of abductive inferences (Koehler 1991; Brem and Rips 2000).
Another argument against the above schema is that it is too permissive. There will be several hypotheses that are explanations for phenomena p1, p2, p3, …, so the fact that a particular hypothesis explains the phenomena is not a decisive criterion for developing that hypothesis (Harman 1965; see also Blackwell 1969). Additional criteria are required to evaluate the hypothesis yielded by abductive inferences.
Finally, it is worth noting that the schema of abductive reasoning does not explain the very act of conceiving a hypothesis or hypothesis-type. The processes by which a new idea is first articulated remain unanalyzed in the above schema. The schema focuses on the reasoning processes by which an exploratory hypothesis is assessed in terms of its merits and promise (Laudan 1980; Schaffner 1993).
In more recent work on abduction and discovery, two notions of abduction are sometimes distinguished: the common notion of abduction as inference to the best explanation (selective abduction) and creative abduction (Magnani 2000, 2009). Selective abduction—the inference to the best explanation—involves selecting a hypothesis from a set of known hypotheses. Medical diagnosis exemplifies this kind of abduction. Creative abduction, by contrast, involves generating a new, plausible hypothesis. This happens, for instance, in medical research, when the notion of a new disease is articulated. However, it is still an open question whether this distinction can be drawn, or whether there is a more gradual transition from selecting an explanatory hypothesis from a familiar domain (selective abduction) to selecting a hypothesis that is slightly modified from the familiar set and to identifying a more drastically modified or altered assumption.
Another recent suggestion is to broaden Peirce’s original account of abduction and to include not only verbal information but also non-verbal mental representations, such as visual, auditory, or motor representations. In Thagard’s approach, representations are characterized as patterns of activity in neural populations (see also section 9.3 below). The advantage of the neural account of human reasoning is that it covers features such as the surprise that accompanies the generation of new insights or the visual and auditory representations that contribute to it. Surprise, for instance, could be characterized as resulting from rapid changes in activation of the node in a neural network representing the “surprising” element (Thagard and Stewart 2011). If all mental representations can be characterized as patterns of firing in neural populations, abduction can be analyzed as the combination or “convolution” (Thagard) of patterns of neural activity from disjoint or overlapping patterns of activity (Thagard 2010).
The concern with the logic of discovery has also motivated research on artificial intelligence at the intersection of philosophy of science and cognitive science. In this approach, scientific discovery is treated as a form of problem-solving activity (Simon 1973; see also Newell and Simon 1971), whereby the systematic aspects of problem solving are studied within an information-processing framework. The aim is to clarify with the help of computational tools the nature of the methods used to discover scientific hypotheses. These hypotheses are regarded as solutions to problems. Philosophers working in this tradition build computer programs employing methods of heuristic selective search (e.g., Langley et al. 1987). In computational heuristics, search programs can be described as searches for solutions in a so-called “problem space” in a certain domain. The problem space comprises all possible configurations in that domain (e.g., for chess problems, all possible arrangements of pieces on a chessboard). Each configuration is a “state” of the problem space. There are two special states, namely the goal state, i.e., the state to be reached, and the initial state, i.e., the configuration at the starting point from which the search begins. There are operators, which determine the moves that generate new states from the current state. There are path constraints, which limit the permitted moves. Problem solving is the process of searching for a solution of the problem of how to generate the goal state from an initial state. In principle, all states can be generated by applying the operators to the initial state, then to the resulting state, until the goal state is reached (Langley et al. 1987: chapter 9). A problem solution is a sequence of operations leading from the initial to the goal state.
The basic idea behind computational heuristics is that rules can be identified that serve as guidelines for finding a solution to a given problem quickly and efficiently by avoiding undesired states of the problem space. These rules are best described as rules of thumb. The aim of constructing a logic of discovery thus becomes the aim of constructing a heuristics for the efficient search for solutions to problems. The term “heuristic search” indicates that in contrast to algorithms, problem-solving procedures lead to results that are merely provisional and plausible. A solution is not guaranteed, but heuristic searches are advantageous because they are more efficient than exhaustive random trial and error searches. Insofar as it is possible to evaluate whether one set of heuristics is better—more efficacious—than another, the logic of discovery turns into a normative theory of discovery.
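As a toy illustration of this framing (not a reconstruction of any published discovery program), the sketch below implements a generic best-first heuristic search over a problem space given as an initial state, an operator function, a goal test, and a heuristic score; all of the names are illustrative assumptions.

```python
# Toy best-first heuristic search over a problem space in the sense used
# above: states, operators that generate new states, an initial state,
# and a goal state. Names and API are illustrative, not from a real system.
import heapq
from itertools import count

def best_first_search(initial, operators, is_goal, heuristic, max_steps=10_000):
    """Search for a path of states from `initial` to a goal state.

    `operators(state)` returns successor states; `heuristic(state)` scores
    how promising a state looks (lower is better). Unlike exhaustive search,
    the frontier is expanded in heuristic order, so a solution can be found
    quickly when the heuristic is good, but success is never guaranteed.
    """
    tie = count()  # tie-breaker so the heap never compares states directly
    frontier = [(heuristic(initial), next(tie), initial, [])]
    seen = {initial}
    for _ in range(max_steps):
        if not frontier:
            return None
        _, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path + [state]
        for nxt in operators(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), next(tie), nxt, path + [state]))
    return None

# Example problem space: reach 25 from 1 with the operators "+3" and "*2",
# guided by distance to the goal as the heuristic.
path = best_first_search(
    initial=1,
    operators=lambda s: [s + 3, s * 2],
    is_goal=lambda s: s == 25,
    heuristic=lambda s: abs(25 - s),
)
print(path)  # [1, 4, 8, 16, 19, 22, 25]
```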
Arguably, because it is possible to reconstruct important scientific discovery processes with sets of computational heuristics, the scientific discovery process can be considered as a special case of the general mechanism of information processing. In this context, the term “logic” is not used in the narrow sense of a set of formal, generally applicable rules to draw inferences but again in a broad sense as a label for a set of procedural rules.
The computer programs that embody the principles of heuristic searches in scientific inquiry simulate the paths that scientists followed when they searched for new theoretical hypotheses. Computer programs such as BACON (Simon et al. 1981) and KEKADA (Kulkarni and Simon 1988) utilize sets of problem-solving heuristics to detect regularities in given data sets. The program would note, for instance, that the values of a dependent term are constant or that a set of values for a term x and a set of values for a term y are linearly related. It would thus “infer” that the dependent term always has that value or that a linear relation exists between x and y . These programs can “make discoveries” in the sense that they can simulate successful discoveries such as Kepler’s third law (BACON) or the Krebs cycle (KEKADA).
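In the same spirit, the following toy sketch mimics the kind of regularity-spotting heuristic just described: it checks whether a dependent term is constant or whether two terms are linearly related. It is an illustration of the idea, not the actual BACON program.

```python
# Toy regularity detector in the spirit of the heuristics described above:
# given paired observations, report whether the dependent term is constant
# or whether the two terms are (to within a tolerance) linearly related.

def detect_regularity(xs, ys, tol=1e-9):
    """Return a description of a simple regularity in the data, if any."""
    if all(abs(y - ys[0]) <= tol for y in ys):
        return f"y is constant at {ys[0]}"
    # Fit y = a*x + b by least squares, then test the residuals.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = mean_y - a * mean_x
    if all(abs(y - (a * x + b)) <= tol for x, y in zip(xs, ys)):
        return f"y is linear in x: y = {a:g}*x + {b:g}"
    return "no simple regularity found"

print(detect_regularity([1, 2, 3, 4], [5, 7, 9, 11]))  # y is linear in x: y = 2*x + 3
```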
Computational theories of scientific discoveries have helped identify and clarify a number of problem-solving strategies. An example of such a strategy is heuristic means-ends analysis, which involves identifying specific differences between the present and the goal situation and searching for operators (processes that will change the situation) that are associated with the differences that were detected. Another important heuristic is to divide the problem into sub-problems and to begin solving the one with the smallest number of unknowns to be determined (Simon 1977). Computational approaches have also highlighted the extent to which the generation of new knowledge draws on existing knowledge that constrains the development of new hypotheses.
As accounts of scientific discoveries, the early computational heuristics have some limitations. Compared to the problem spaces given in computational heuristics, the complex problem spaces for scientific problems are often ill defined, and the relevant search space and goal state must be delineated before heuristic assumptions can be formulated (Bechtel and Richardson 1993: chapter 1). Because a computer program requires the data from actual experiments, the simulations cover only certain aspects of scientific discoveries; in particular, a program cannot determine by itself which data are relevant, which data to relate, and what form of law it should look for (Gillies 1996). However, as a consequence of the rise of so-called “deep learning” methods in data-intensive science, there is renewed philosophical interest in the question of whether machines can make discoveries (section 10).
Many philosophers maintain that discovery is a legitimate topic for philosophy of science while abandoning the notion that there is a logic of discovery. One very influential approach is Thomas Kuhn’s analysis of the emergence of novel facts and theories (Kuhn 1970 [1962]: chapter 6). Kuhn identifies a general pattern of discovery as part of his account of scientific change. A discovery is not a simple act, but an extended, complex process, which culminates in paradigm changes. Paradigms are the symbolic generalizations, metaphysical commitments, values, and exemplars that are shared by a community of scientists and that guide the research of that community. Paradigm-based, normal science does not aim at novelty but instead at the development, extension, and articulation of accepted paradigms. A discovery begins with an anomaly, that is, with the recognition that the expectations induced by an established paradigm are being violated. The process of discovery involves several aspects: observations of an anomalous phenomenon, attempts to conceptualize it, and changes in the paradigm so that the anomaly can be accommodated.
It is the mark of success of normal science that it does not make transformative discoveries, and yet such discoveries come about as a consequence of normal, paradigm-guided science. The more detailed and the better developed a paradigm, the more precise are its predictions. The more precisely the researchers know what to expect, the better they are able to recognize anomalous results and violations of expectations:
novelty ordinarily emerges only for the man who, knowing with precision what he should expect, is able to recognize that something has gone wrong. Anomaly appears only against the background provided by the paradigm. (Kuhn 1970 [1962]: 65)
Drawing on several historical examples, Kuhn argues that it is usually impossible to identify the very moment when something was discovered, or even the individual who made the discovery. Kuhn illustrates these points with the discovery of oxygen (see Kuhn 1970 [1962]: 53–56). Oxygen had not been discovered before 1774 and had been discovered by 1777. Even before 1774, Lavoisier had noticed that something was wrong with phlogiston theory, but he was unable to move forward. Two other investigators, C. W. Scheele and Joseph Priestley, independently identified a gas obtained from heating solid substances. But Scheele’s work remained unpublished until after 1777, and Priestley did not identify his substance as a new sort of gas. In 1777, Lavoisier presented the oxygen theory of combustion, which gave rise to a fundamental reconceptualization of chemistry. But according to this theory as Lavoisier first presented it, oxygen was not a chemical element. It was an atomic “principle of acidity”, and oxygen gas was a combination of that principle with caloric. According to Kuhn, all of these developments are part of the discovery of oxygen, but none of them can be singled out as “the” act of discovery.
In pre-paradigmatic periods or in times of paradigm crisis, theory-induced discoveries may happen. In these periods, scientists speculate and develop tentative theories, which may lead to novel expectations, and to experiments and observations that test whether these expectations can be confirmed. Even though no precise predictions can be made, the phenomena thus uncovered are often not quite what had been expected. In these situations, the simultaneous exploration of the new phenomena and articulation of the tentative hypotheses together bring about discovery.
In cases like the discovery of oxygen, by contrast, which took place while a paradigm was already in place, the unexpected becomes apparent only slowly, with difficulty, and against some resistance. Only gradually do the anomalies become visible as such. It takes time for the investigators to recognize “both that something is and what it is” (Kuhn 1970 [1962]: 55). Eventually, a new paradigm becomes established and the anomalous phenomena become the expected phenomena.
Recent cognitive-neuroscience studies of brain activity during periods of conceptual change support Kuhn’s view that conceptual change is hard to achieve. These studies examine the neural processes involved in the recognition of anomalies and compare them with the brain activity involved in processing information that is consistent with preferred theories. The studies suggest that the two types of data are processed differently (Dunbar et al. 2007).
8. Methodologies of discovery
Advocates of the view that there are methodologies of discovery use the term “logic” in the narrow sense of an algorithmic procedure for generating new ideas. But like the AI-based theories of scientific discovery described in section 6, methodologies of scientific discovery interpret the concept “discovery” as a label for an extended process of generating and articulating new ideas, and they often describe the process in terms of problem solving. In these approaches, the distinction between the context of discovery and the context of justification is challenged because the methodology of discovery is understood to play a justificatory role. Advocates of a methodology of discovery usually rely on a distinction between two kinds of justification procedure: the justification involved in the process of generating new knowledge and the justification involved in testing it. Consequential or “strong” justifications are methods of testing. The justification involved in discovery, by contrast, is conceived as generative (as opposed to consequential) justification (section 8.1) or as weak (as opposed to strong) justification (section 8.2). Again, some terminological ambiguity exists, because according to some philosophers there are three contexts, not two: only the initial conception of a new idea (the creative act) is the context of discovery proper, and between it and justification there exists a separate context of pursuit (Laudan 1980). But many advocates of methodologies of discovery regard the context of pursuit as an integral part of the process of justification. They retain the notion of two contexts and re-draw the boundaries between the contexts of discovery and justification as they were drawn in the early 20th century.
The methodology of discovery has sometimes been characterized as a form of justification that is complementary to the methodology of testing (Nickles 1984, 1985, 1989). According to the methodology of testing, empirical support for a theory results from successfully testing the predictive consequences derived from that theory (and appropriate auxiliary assumptions). In light of this methodology, justification for a theory is “consequential justification,” the notion that a hypothesis is established if successful novel predictions are derived from the theory or claim. Generative justification complements consequential justification. Advocates of generative justification hold that there exists an important form of justification in science that involves reasoning to a claim from data or previously established results more generally.
One classic example of a generative methodology is the set of Newton’s rules for the study of natural philosophy. According to these rules, general propositions are established by deducing them from the phenomena. The notion of generative justification seeks to preserve the intuition behind such classic conceptions of justification by deduction. Generative justification amounts to the rational reconstruction of the discovery path in order to establish the claim’s discoverability had the researchers known what is known now, regardless of how it was first thought of (Nickles 1985, 1989). The reconstruction demonstrates in hindsight that the claim could have been discovered in this manner had the necessary information and techniques been available. In other words, generative justification, understood as “discoverability” or “potential discovery”, justifies a knowledge claim by deriving it from results that are already established. While generative justification does not retrace the exact steps of the discovery path that were actually taken, it represents scientists’ actual practices better than consequential justification does, because scientists tend to construct new claims from available knowledge. Generative justification is a weaker version of the traditional ideal of justification by deduction from the phenomena. Justification by deduction from the phenomena is complete if a theory or claim is completely determined by what we already know. The demonstration of discoverability results from the successful derivation of a claim or theory from the most basic and most solidly established empirical information.
Discoverability as described in the previous paragraphs is a mode of justification. Like the testing of novel predictions derived from a hypothesis, generative justification begins when the phase of finding and articulating a hypothesis worthy of assessment is drawing to a close. Other approaches to the methodology of discovery are directly concerned with the procedures involved in devising new hypotheses. The argument in favor of this kind of methodology is that the procedures of devising new hypotheses already include elements of appraisal. These preliminary assessments have been termed “weak” evaluation procedures (Schaffner 1993). Weak evaluations are relevant during the process of devising a new hypothesis: they provide reasons for accepting a hypothesis as promising and worthy of further attention. Strong evaluations, by contrast, provide reasons for accepting a hypothesis as (approximately) true or confirmed. Both generative and consequential testing as discussed above are strong evaluation procedures, rigorous and systematically organized according to the principles of hypothesis derivation or hypothetico-deductive (H-D) testing. A methodology of preliminary appraisal, by contrast, articulates criteria for the evaluation of a hypothesis prior to rigorous derivation or testing. It aids the decision about whether to take that hypothesis seriously enough to develop it further and test it. For advocates of this version of the methodology of discovery, it is the task of philosophy of science to characterize sets of constraints and methodological rules guiding the complex process of prior-to-test evaluation of hypotheses.
In contrast to the computational approaches discussed above, strategies of preliminary appraisal are not regarded as subject-neutral but as specific to particular fields of study. Philosophers of biology, for instance, have developed a fine-grained framework to account for the generation and preliminary evaluation of biological mechanisms (Darden 2002; Craver 2002; Bechtel and Richardson 1993; Craver and Darden 2013). Some philosophers have suggested that the phase of preliminary appraisal be further divided into two phases, the phase of appraising and the phase of revising. According to Lindley Darden, the phases of generation, appraisal and revision of descriptions of mechanisms can be characterized as reasoning processes governed by reasoning strategies. Different reasoning strategies govern the different phases (Darden 1991, 2002; Craver 2002; Darden 2009). The generation of hypotheses about mechanisms, for instance, is governed by the strategy of “schema instantiation” (see Darden 2002). The discovery of the mechanism of protein synthesis involved the instantiation of an abstract schema for chemical reactions: reactant 1 + reactant 2 = product. The actual mechanism of protein synthesis was found through specification and modification of this schema.
None of these strategies is deemed necessary for discovery, and they are not prescriptions for biological research. Rather, the strategies are deemed sufficient for the discovery of mechanisms. The methodology of the discovery of mechanisms is an extrapolation from past episodes of research on mechanisms and the result of a synthesis of rational reconstructions of several of these historical episodes. The methodology of discovery is weakly normative in the sense that the strategies for the discovery of mechanisms that were successful in the past may prove useful in future biological research (Darden 2002).
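The schema-instantiation strategy described above lends itself to a simple data-structure rendering. The sketch below is only a cartoon of Darden’s idea, assuming a slot-and-filler representation; the slot names and the progression of steps are illustrative, not drawn from the historical record.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ReactionSchema:
    """Abstract schema: reactant_1 + reactant_2 -> product, with open slots."""
    reactant_1: str = "?"
    reactant_2: str = "?"
    product: str = "?"

# Discovery as progressive specification of an abstract schema:
abstract = ReactionSchema()
step1 = replace(abstract, product="protein")
step2 = replace(step1, reactant_1="amino acids")
print(step2)  # reactant_2 remains an open slot, to be filled by further research
```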
As philosophers of science have again become more attuned to actual scientific practices, interest in heuristic strategies has also been revived. Many analysts now agree that discovery processes can be regarded as problem-solving activities, whereby a discovery is a solution to a problem. Heuristics-based methodologies of discovery are neither purely subjective and intuitive nor algorithmic and formalizable; the point is that reasons can be given for pursuing one or another problem-solving strategy. These rules are open and do not guarantee a solution to a problem when applied (Ippoliti 2018). On this view, scientific researchers are no longer seen as Kuhnian “puzzle solvers” but as problem solvers and decision makers in complex, variable, and changing environments (Wimsatt 2007).
Philosophers of discovery working in this tradition draw on a growing body of literature in cognitive psychology, management science, operations research, and economics on human reasoning and decision making in contexts with limited information, under time constraints, and with sub-optimal means (Gigerenzer & Sturm 2012). Heuristic strategies characterized in these studies, such as Gigerenzer’s “tools-to-theories” heuristic, are then applied to understand scientific knowledge generation (Gigerenzer 1992, Nickles 2018). Other analysts specify heuristic strategies in a range of scientific fields, including climate science, neurobiology, and clinical medicine (Gramelsberger 2011, Schaffner 2008, Gillies 2018). Finally, in analytic epistemology, formal methods are developed to identify and assess distinct heuristic strategies currently in use, such as Bayesian reverse engineering in cognitive science (Zednik and Jäkel 2016).
As the literature on heuristics continues to grow, it has become clear that the term “heuristics” is itself used in a variety of different ways. (For a valuable taxonomy of meanings of “heuristic”, see Chow 2015; see also Ippoliti 2018.) Moreover, as in the context of earlier debates about computational heuristics, debates continue about the limitations of heuristics. The use of heuristics may come at a cost if heuristics introduce systematic biases (Wimsatt 2007). Some philosophers thus call for general principles for the evaluation of heuristic strategies (Hey 2016).
9. Cognitive perspectives on discovery
The approaches to scientific discovery presented in the previous sections focus on the adoption, articulation, and preliminary evaluation of ideas or hypotheses prior to rigorous testing, not on how a novel hypothesis or idea is first thought up. For a long time, the predominant view among philosophers of discovery was that the initial step of discovery is a mysterious intuitive leap of the human mind that cannot be analyzed further. More recent accounts of discovery informed by evolutionary biology also do not explicate how new ideas are formed. The generation of new ideas is akin to random, blind variations of thought processes, which have to be inspected by the critical mind and assessed as neutral, productive, or useless (Campbell 1960; see also Hull 1988), but the key processes by which new ideas are generated are left unanalyzed.
With the recent rapprochement among philosophy of mind, cognitive science, and psychology, and with the increased integration of empirical research into philosophy of science, these processes have been subjected to closer analysis, and philosophical studies of creativity have seen a surge of interest (e.g. Paul & Kaufman 2014a). The distinctive feature of these studies is that they integrate philosophical analyses with empirical work from cognitive science, psychology, evolutionary biology, and computational neuroscience (Thagard 2012). Analysts have distinguished different kinds and different features of creative thinking and have examined certain features in depth, and from new angles. Recent philosophical research on creativity comprises conceptual analyses and integrated approaches based on the assumption that creativity can be analyzed and that empirical research can contribute to the analysis (Paul & Kaufman 2014b). Two key elements of the cognitive processes involved in creative thinking that have been the focus of philosophical analysis are analogies (section 9.2) and mental models (section 9.3).
General definitions of creativity highlight novelty or originality and significance or value as distinctive features of a creative act or product (Sternberg & Lubart 1999, Kieran 2014, Paul & Kaufman 2014b, although see Hills & Bird 2019). Different kinds of creativity can be distinguished depending on whether the act or product is novel for a particular individual or entirely novel. Psychologist Margaret Boden distinguishes between psychological creativity (P-creativity) and historical creativity (H-creativity). P-creativity is a development that is new, surprising and important to the particular person who comes up with it. H-creativity, by contrast, is radically novel, surprising, and important—it is generated for the first time (Boden 2004). Further distinctions have been proposed, such as anthropological creativity (construed as a human condition) and metaphysical creativity, a radically new thought or action in the sense that it is unaccounted for by antecedents and available knowledge, and thus constitutes a radical break with the past (Kronfeldner 2009, drawing on Hausman 1984).
Psychological studies analyze the personality traits and behavioral dispositions of creative individuals that are conducive to creative thinking. They suggest that creative scientists share certain distinct personality traits, including confidence, openness, dominance, independence, and introversion, as well as arrogance and hostility. (For overviews of recent studies on the personality traits of creative scientists, see Feist 1999, 2006: chapter 5.)
Recent work on creativity in philosophy of mind and cognitive science offers substantive analyses of the cognitive and neural mechanisms involved in creative thinking (Abraham 2018, Minai et al. 2022) and critical scrutiny of the romantic idea of genius creativity as something deeply mysterious (Blackburn 2014). Some of this research aims to characterize features that are common to all creative processes, such as Thagard and Stewart’s account according to which creativity results from combinations of representations (Thagard & Stewart 2011, but see Pasquale and Poirier 2016). Other research aims to identify the features that are distinctive of scientific creativity as opposed to other forms of creativity, such as artistic creativity or creative technological invention (Simonton 2014).
Many philosophers of science highlight the role of analogy in the development of new knowledge, whereby analogy is understood as a process of bringing ideas that are well understood in one domain to bear on a new domain (Thagard 1984; Holyoak and Thagard 1996). An important source for philosophical thought about analogy is Mary Hesse’s conception of models and analogies in theory construction and development. In this approach, analogies are similarities between different domains. Hesse introduces the distinction between positive, negative, and neutral analogies (Hesse 1966: 8). If we consider the relation between gas molecules and a model for gas, namely a collection of billiard balls in random motion, we will find properties that are common to both domains (positive analogy) as well as properties that can only be ascribed to the model but not to the target domain (negative analogy). There is a positive analogy between gas molecules and a collection of billiard balls because both the balls and the molecules move randomly. There is a negative analogy between the domains because billiard balls are colored, hard, and shiny but gas molecules do not have these properties. The most interesting properties are those properties of the model about which we do not know whether they are positive or negative analogies. This set of properties is the neutral analogy. These properties are the significant properties because they might lead to new insights about the less familiar domain. From our knowledge about the familiar billiard balls, we may be able to derive new predictions about the behavior of gas molecules, which we could then test.
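Hesse’s three-way partition is essentially set-theoretic, so it can be stated in a few lines. In the sketch below the property lists are illustrative toy data; beyond the billiard-ball example just discussed, the specific entries are assumptions made for this example.

```python
# Properties of the model (billiard balls) and what we know about the target (gas).
model_properties = {"moves randomly", "collides elastically", "colored",
                    "hard", "shiny", "transfers momentum on impact"}
known_shared     = {"moves randomly", "collides elastically"}   # positive analogy
known_model_only = {"colored", "hard", "shiny"}                 # negative analogy

# The neutral analogy: model properties not yet classified either way.
neutral = model_properties - known_shared - known_model_only
print(neutral)
# {'transfers momentum on impact'} -- undecided properties like this one are
# the scientifically interesting part: they yield testable predictions about
# the target domain (here, a step toward the kinetic theory of gas pressure).
```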
Hesse offers a more detailed analysis of the structure of analogical reasoning through the distinction between horizontal and vertical analogies between domains. Horizontal analogies between two domains concern the sameness or similarity of properties in the two domains. If we consider sound and light waves, there are similarities between them: sound echoes, light reflects; sound is loud, light is bright; both sound and light are detectable by our senses. There are also relations among the properties within one domain, such as the causal relation between sound and the loud tone we hear and, analogously, between physical light and the bright light we see. These are vertical analogies. For Hesse, vertical analogies hold the key to the construction of new theories.
Analogies play several roles in science. Not only do they contribute to discovery, but they also play a role in the development and evaluation of scientific theories. Current discussions about analogy and discovery have expanded and refined Hesse’s approach in various ways. Some philosophers have developed criteria for evaluating analogy arguments (Bartha 2010). Other work has identified highly significant analogies that were particularly fruitful for the advancement of science (Holyoak and Thagard 1996: 186–188; Thagard 1999: chapter 9). The majority of analysts explore the features of the cognitive mechanisms through which aspects of a familiar domain or source are applied to an unknown target domain in order to understand what is unknown. According to the influential multi-constraint theory of analogical reasoning developed by Holyoak and Thagard, the transfer processes involved in analogical reasoning (scientific and otherwise) are guided or constrained in three main ways: 1) by the direct similarity between the elements involved; 2) by the structural parallels between source and target domain; and 3) by the purposes of the investigators, i.e., the reasons why the analogy is considered. Discovery, the formulation of a new hypothesis, is one such purpose.
“In vivo” investigations of scientists reasoning in their laboratories have not only shown that analogical reasoning is a key component of scientific practice, but also that the distance between source and target depends on the purpose for which analogies are sought. Scientists trying to fix experimental problems draw analogies between targets and sources from highly similar domains. In contrast, scientists attempting to formulate new models or concepts draw analogies between less similar domains. Analogies between radically different domains, however, are rare (Dunbar 1997, 2001).
In current cognitive science, human cognition is often explored in terms of model-based reasoning. The starting point of this approach is the notion that much of human reasoning, including probabilistic and causal reasoning as well as problem solving, takes place through mental modeling rather than through the application of logic or methodological criteria to a set of propositions (Johnson-Laird 1983; Magnani et al. 1999; Magnani and Nersessian 2002). In model-based reasoning, the mind constructs a structural representation of a real-world or imaginary situation and manipulates this structure. On this perspective, conceptual structures are viewed as models, and conceptual innovation as the construction of new models through various modeling operations. Analogical reasoning, or analogical modeling, is regarded as one of three main forms of model-based reasoning that appear to be relevant for conceptual innovation in science. Besides analogical modeling, visual modeling and simulative modeling or thought experiments also play key roles (Nersessian 1992, 1999, 2009). These modeling practices are constructive in that they aid the development of novel mental models. The key elements of model-based reasoning are the appeal to knowledge of generative principles and constraints for physical models in a source domain and the use of various forms of abstraction. Conceptual innovation results from the creation of new concepts through processes that abstract and integrate source and target domains into new models (Nersessian 2009).
Some critics have argued that despite the large amount of work on the topic, the notion of mental model is not sufficiently clear. Thagard seeks to clarify the concept by characterizing mental models in terms of neural processes (Thagard 2010). In his approach, mental models are produced through complex patterns of neural firing, whereby the neurons and the interconnections between them are dynamic and changing. A pattern of firing neurons is a representation when there is a stable causal correlation between the pattern of activation and the thing that is represented. In this research, questions about the nature of model-based reasoning are transformed into questions about the brain mechanisms that produce mental representations.
The above sections again show that the study of scientific discovery integrates different approaches, combining conceptual analysis of processes of knowledge generation with empirical work on creativity, drawing heavily and explicitly on current research in psychology and cognitive science, and on in vivo laboratory observations, as well as brain imaging techniques (Kounios & Beeman 2009, Thagard & Stewart 2011).
Earlier critics of AI-based theories of scientific discovery argued that a computer cannot devise new concepts but is confined to the concepts included in the given computer language (Hempel 1985: 119–120); nor can it design new experiments, instruments, or methods. Subsequent computational research on scientific discovery was driven by the motivation to contribute computational tools that aid scientists in their research (Addis et al. 2016). Computational methods have been used to generate new results leading to refereed scientific publications in astrophysics, cancer research, ecology, and other fields (Langley 2000). However, the philosophical discussion has continued over whether these methods really generate new knowledge or merely speed up data processing. It is also still an open question whether data-intensive science is fundamentally different from traditional research, for instance regarding the status of hypotheses and theories in data-intensive research (Pietsch 2015).
In the wake of recent developments in machine learning, some older discussions about automated discovery have been revived. The availability of vastly improved computational tools and software for data analysis has stimulated new discussions about computer-generated discovery (see Leonelli 2020). It is largely uncontroversial that machine learning tools can aid discovery, for instance in research on antibiotics (Stokes et al. 2020). The notion of a “robot scientist” is mostly used metaphorically, and the vision that human scientists may one day be replaced by computers, by successors of the laboratory automation systems “Adam” and “Eve”, allegedly the first “robot scientists”, is evoked mainly in writings for broader audiences (see King et al. 2009, Williams et al. 2015, for popularized descriptions of these systems), although some interesting ethical challenges do arise from “superhuman AI” (see Russell 2021). It also appears that, on the view that the products of creative acts are both novel and valuable, AI systems should be called “creative”, an implication that not all analysts will find plausible (Boden 2014).
Philosophical analyses focus on various questions arising from the processes involving human-machine complexes. One issue relevant to the problem of scientific discovery arises from the opacity of machine learning. If machine learning indeed escapes human understanding, how can we be warranted to say that knowledge or understanding is generated by deep learning tools? Might we have reason to say that humans and machines are “co-developers” of knowledge (Tamaddoni-Nezhad et al. 2021)?
New perspectives on scientific discovery have also opened up in the context of social epistemology (see Goldman & O’Connor 2021). Social epistemology investigates knowledge production as a group process, specifically the epistemic effects of group composition in terms of cognitive diversity and unity, and of social interactions within groups or institutions, such as testimony and trust, peer disagreement and critique, and group justification. On this view, discovery is a collective achievement, and the task is to explore how various social-epistemic activities and practices affect the knowledge generated by the groups in question. Recent research in the different branches of social epistemology has obvious implications for debates about scientific discovery. Social epistemologists have examined individual cognitive agents in their roles as group members (as providers of information or as critics) and the interactions among these members (Longino 2001), groups as aggregates of diverse agents, and the entire group as an epistemic agent (e.g., Koons 2021, Dragos 2019).
Standpoint theory, for instance, explores the role of outsiders in knowledge generation, considering how the sociocultural structures and practices in which individuals are embedded aid or obstruct the generation of creative ideas. According to standpoint theorists, people with a standpoint are politically aware and politically engaged people outside the mainstream. Because they have different experiences and access to different domains of expertise than most members of a culture, they can draw on rich conceptual resources for creative thinking (Solomon 2007).
Social epistemologists examining groups as aggregates of agents consider to what extent diversity among group members is conducive to knowledge production, and whether and to what extent beliefs and attitudes must be shared among group members to make collective knowledge possible (Bird 2014). These are still open questions. Some formal models of the influence of diversity on knowledge generation suggest that cognitive diversity is beneficial to collective knowledge generation (Weisberg and Muldoon 2009), but others have criticized these models (Alexander et al. 2015; see also Thoma 2015 and Pöyhönen 2017 for further discussion).
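The flavor of such formal models can be conveyed with a toy simulation. Everything below is a drastic simplification invented for illustration: the one-dimensional landscape, the hill-climbing rule, and the starting positions are assumptions, and Weisberg and Muldoon’s actual model has different agent types exploring a two-dimensional grid of research approaches.

```python
import random

def significance(x):
    """Toy epistemic landscape: two peaks of different heights."""
    return max(0.0, 1.0 - abs(x - 30) / 10) + max(0.0, 0.6 - abs(x - 70) / 15)

def group_yield(start_points, steps=200, seed=1):
    """Each agent hill-climbs from its starting approach; the group's
    epistemic yield is the best significance any member reaches."""
    rng = random.Random(seed)
    best = 0.0
    for x in start_points:
        for _ in range(steps):
            candidate = x + rng.choice([-1, 1])
            if significance(candidate) >= significance(x):
                x = candidate
        best = max(best, significance(x))
    return best

homogeneous = [50, 50, 50, 50]   # everyone starts from the same approach
diverse = [10, 35, 60, 85]       # cognitively diverse starting points
print(group_yield(homogeneous), group_yield(diverse))
# Diverse starting points cover more of the landscape, so the diverse
# group typically finds the higher peak.
```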
This essay has illustrated that philosophy of discovery has come full circle. It has once again become a thriving field of philosophical study, now intersecting with, and drawing on, philosophical and empirical studies of creative thinking, problem solving under uncertainty, collective knowledge production, and machine learning. Recent approaches to discovery are typically explicitly interdisciplinary and integrative, cutting across previous distinctions between hypothesis generation and theory building, data collection, assessment, and selection, as well as across descriptive-analytic, historical, and normative perspectives (Danks & Ippoliti 2018, Michel 2021). The goal is no longer to provide one overarching account of scientific discovery but to produce multifaceted analyses of past and present activities of knowledge generation, in all their complexity and heterogeneity, that are illuminating to the non-scientist and the scientific researcher alike.
- Abraham, A., 2018, The Neuroscience of Creativity, Cambridge: Cambridge University Press.
- Addis, M., Sozou, P.D., Gobet, F. and Lane, P. R., 2016, “Computational scientific discovery and cognitive science theories”, in Mueller, V. C. (ed.) Computing and Philosophy , Springer, 83–87.
- Alexander, J., Himmelreich, J., and Thompson, C. 2015, Epistemic Landscapes, Optimal Search, and the Division of Cognitive Labor, Philosophy of Science 82: 424–453.
- Arabatzis, T., 1996, “Rethinking the ‘Discovery’ of the Electron”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 27: 405–435.
- Bartha, P., 2010, By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments , New York: Oxford University Press.
- Bechtel, W. and R. Richardson, 1993, Discovering Complexity , Princeton: Princeton University Press.
- Benjamin, A.C., 1934, “The Mystery of Scientific Discovery”, Philosophy of Science, 1: 224–36.
- Bird, A. 2014, “When is There a Group that Knows? Distributed Cognition, Scientific Knowledge, and the Social Epistemic Subject”, in J. Lackey (ed.), Essays in Collective Epistemology , Oxford: Oxford University Press, 42–63.
- Blackburn, S. 2014, “Creativity and Not-So-Dumb Luck”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn. https://doi.org/10.1093/acprof:oso/9780199836963.003.0008.
- Blackwell, R.J., 1969, Discovery in the Physical Sciences , Notre Dame: University of Notre Dame Press.
- Boden, M.A., 2004, The Creative Mind: Myths and Mechanisms , London: Routledge.
- –––, 2014, “Creativity and Artificial Intelligence: A Contradiction in Terms?”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays, New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.003.0012.
- Brannigan, A., 1981, The Social Basis of Scientific Discoveries , Cambridge: Cambridge University Press.
- Brem, S. and L.J. Rips, 2000, “Explanation and Evidence in Informal Argument”, Cognitive Science , 24: 573–604.
- Campbell, D., 1960, “Blind Variation and Selective Retention in Creative Thought as in Other Knowledge Processes”, Psychological Review , 67: 380–400.
- Carmichael, R.D., 1922, “The Logic of Discovery”, The Monist , 32: 569–608.
- –––, 1930, The Logic of Discovery , Chicago: Open Court.
- Chow, S. 2015, “Many Meanings of ‘Heuristic’”, British Journal for the Philosophy of Science , 66: 977–1016
- Craver, C.F., 2002, “Interlevel Experiments, Multilevel Mechanisms in the Neuroscience of Memory”, Philosophy of Science Supplement , 69: 83–97.
- Craver, C.F. and L. Darden, 2013, In Search of Mechanisms: Discoveries across the Life Sciences , Chicago: University of Chicago Press.
- Curd, M., 1980, “The Logic of Discovery: An Analysis of Three Approaches”, in T. Nickles (ed.) Scientific Discovery, Logic, and Rationality , Dordrecht: D. Reidel, 201–19.
- Danks, D. & Ippoliti, E. (eds.) 2018, Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer.
- Darden, L., 1991, Theory Change in Science: Strategies from Mendelian Genetics , New York: Oxford University Press.
- –––, 2002, “Strategies for Discovering Mechanisms: Schema Instantiation, Modular Subassembly, Forward/Backward Chaining”, Philosophy of Science, 69: S354–S365.
- –––, 2009, “Discovering Mechanisms in Molecular Biology: Finding and Fixing Incompleteness and Incorrectness”, in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity , Dordrecht: Springer, 43–55.
- Dewey, J., 1910, How We Think, Boston: D.C. Heath.
- Dragos, C., 2019, “Groups Can Know How”, American Philosophical Quarterly, 56: 265–276.
- Ducasse, C.J., 1951, “Whewell’s Philosophy of Scientific Discovery II”, The Philosophical Review , 60(2): 213–34.
- Dunbar, K., 1997, “How scientists think: On-line creativity and conceptual change in science”, in T.B. Ward, S.M. Smith, and J. Vaid (eds.), Conceptual Structures and Processes: Emergence, Discovery, and Change , Washington, DC: American Psychological Association Press, 461–493.
- –––, 2001, “The Analogical Paradox: Why Analogy is so Easy in Naturalistic Settings Yet so Difficult in Psychological Laboratories”, in D. Gentner, K.J. Holyoak, and B.N. Kokinov (eds.), The Analogical Mind: Perspectives from Cognitive Science , Cambridge, MA: MIT Press.
- Dunbar, K, J. Fugelsang, and C Stein, 2007, “Do Naïve Theories Ever Go Away? Using Brain and Behavior to Understand Changes in Concepts”, in M. Lovett and P. Shah (eds.), Thinking with Data: 33rd Carnegie Symposium on Cognition , Mahwah: Erlbaum, 193–205.
- Feist, G.J., 1999, “The Influence of Personality on Artistic and Scientific Creativity”, in R.J. Sternberg (ed.), Handbook of Creativity , New York: Cambridge University Press, 273–96.
- –––, 2006, The psychology of science and the origins of the scientific mind , New Haven: Yale University Press.
- Gillies D., 1996, Artificial intelligence and scientific method . Oxford: Oxford University Press.
- –––, 2018 “Discovering Cures in Medicine” in Danks, D. & Ippoliti, E. (eds.), Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer, 83–100.
- Goldman, Alvin & O’Connor, C., 2021, “Social Epistemology”, The Stanford Encyclopedia of Philosophy (Winter 2021 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2021/entries/epistemology-social/>.
- Gramelsberger, G. 2011, “What Do Numerical (Climate) Models Really Represent?” Studies in History and Philosophy of Science 42: 296–302.
- Gutting, G., 1980, “Science as Discovery”, Revue internationale de philosophie , 131: 26–48.
- Hanson, N.R., 1958, Patterns of Discovery , Cambridge: Cambridge University Press.
- –––, 1960, “Is there a Logic of Scientific Discovery?”, Australasian Journal of Philosophy , 38: 91–106.
- –––, 1965, “Notes Toward a Logic of Discovery”, in R.J. Bernstein (ed.), Perspectives on Peirce. Critical Essays on Charles Sanders Peirce , New Haven and London: Yale University Press, 42–65.
- Harman, G.H., 1965, “The Inference to the Best Explanation”, Philosophical Review , 74.
- Hausman, C. R. 1984, A Discourse on Novelty and Creation , New York: SUNY Press.
- Hempel, C.G., 1985, “Thoughts in the Limitations of Discovery by Computer”, in K. Schaffner (ed.), Logic of Discovery and Diagnosis in Medicine , Berkeley: University of California Press, 115–22.
- Hesse, M., 1966, Models and Analogies in Science , Notre Dame: University of Notre Dame Press.
- Hey, S. 2016 “Heuristics and Meta-heuristics in Scientific Judgement”, British Journal for the Philosophy of Science , 67: 471–495
- Hills, A., Bird, A. 2019, “Against Creativity”, Philosophy and Phenomenological Research , 99: 694–713.
- Holyoak, K.J. and P. Thagard, 1996, Mental Leaps: Analogy in Creative Thought , Cambridge, MA: MIT Press.
- Howard, D., 2006, “Lost Wanderers in the Forest of Knowledge: Some Thoughts on the Discovery-Justification Distinction”, in J. Schickore and F. Steinle (eds.), Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer, 3–22.
- Hoyningen-Huene, P., 1987, “Context of Discovery and Context of Justification”, Studies in History and Philosophy of Science , 18: 501–15.
- Hull, D.L., 1988, Science as Practice: An Evolutionary Account of the Social and Conceptual Development of Science , Chicago: University of Chicago Press.
- Ippoliti, E. 2018, “Heuristic Logic. A Kernel” in Danks, D. & Ippoliti, E. (eds.) Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer, 191–212
- Jantzen, B.C., 2016, “Discovery without a ‘Logic’ would be a Miracle”, Synthese , 193: 3209–3238.
- Johnson-Laird, P., 1983, Mental Models , Cambridge: Cambridge University Press.
- Kieran, M., 2014, “Creativity as a Virtue of Character,” in E. Paul and S. B. Kaufman (eds.), The Philosophy of Creativity: New Essays . Oxford: Oxford University Press, 125–44
- King, R. D. et al. 2009, “The Automation of Science”, Science 324: 85–89.
- Koehler, D.J., 1991, “Explanation, Imagination, and Confidence in Judgment”, Psychological Bulletin , 110: 499–519.
- Koertge, N. 1980, “Analysis as a Method of Discovery during the Scientific Revolution” in Nickles, T. (ed.) Scientific Discovery, Logic, and Rationality vol. I, Dordrecht: Reidel, 139–157
- Koons, J.R. 2021, “Knowledge as a Collective Status”, Analytic Philosophy , https://doi.org/10.1111/phib.12224
- Kordig, C., 1978, “Discovery and Justification”, Philosophy of Science, 45: 110–17.
- Kounios, J. and Beeman, M., 2009, “The Aha! Moment: The Cognitive Neuroscience of Insight”, Current Directions in Psychological Science, 18: 210–16.
- Kronfeldner, M. 2009, “Creativity Naturalized”, The Philosophical Quarterly 59: 577–592.
- Kuhn, T.S., 1970 [1962], The Structure of Scientific Revolutions, 2nd edition, Chicago: The University of Chicago Press; first edition, 1962.
- Kulkarni, D. and H.A. Simon, 1988, “The processes of scientific discovery: The strategy of experimentation”, Cognitive Science , 12: 139–76.
- Langley, P., 2000, “The Computational Support of Scientific Discovery”, International Journal of Human-Computer Studies , 53: 393–410.
- Langley, P., H.A. Simon, G.L. Bradshaw, and J.M. Zytkow, 1987, Scientific Discovery: Computational Explorations of the Creative Processes , Cambridge, MA: MIT Press.
- Laudan, L., 1980, “Why Was the Logic of Discovery Abandoned?” in T. Nickles (ed.), Scientific Discovery (Volume I), Dordrecht: D. Reidel, 173–83.
- Leonelli, S. 2020, “Scientific Research and Big Data”, The Stanford Encyclopedia of Philosophy (Summer 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2020/entries/science-big-data/>
- Leplin, J., 1987, “The Bearing of Discovery on Justification”, Canadian Journal of Philosophy , 17: 805–14.
- Longino, H. 2001, The Fate of Knowledge , Princeton: Princeton University Press
- Lugg, A., 1985, “The Process of Discovery”, Philosophy of Science , 52: 207–20.
- Magnani, L., 2000, Abduction, Reason, and Science: Processes of Discovery and Explanation , Dordrecht: Kluwer.
- –––, 2009, “Creative Abduction and Hypothesis Withdrawal”, in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity , Dordrecht: Springer.
- Magnani, L. and N.J. Nersessian, 2002, Model-Based Reasoning: Science, Technology, and Values , Dordrecht: Kluwer.
- Magnani, L., N.J. Nersessian, and P. Thagard, 1999, Model-Based Reasoning in Scientific Discovery , Dordrecht: Kluwer.
- Michel, J. (ed.) 2021, Making Scientific Discoveries. Interdisciplinary Reflections , Brill | mentis.
- Minai, A., Doboli, S., Iyer, L. 2022 “Models of Creativity and Ideation: An Overview” in Ali A. Minai, Jared B. Kenworthy, Paul B. Paulus, Simona Doboli (eds.), Creativity and Innovation. Cognitive, Social, and Computational Approaches , Springer, 21–46.
- Nersessian, N.J., 1992, “How do scientists think? Capturing the dynamics of conceptual change in science”, in R. Giere (ed.), Cognitive Models of Science , Minneapolis: University of Minnesota Press, 3–45.
- –––, 1999, “Model-based reasoning in conceptual change”, in L. Magnani, N.J. Nersessian and P. Thagard (eds.), Model-Based Reasoning in Scientific Discovery , New York: Kluwer, 5–22.
- –––, 2009, “Conceptual Change: Creativity, Cognition, and Culture ” in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity , Dordrecht: Springer, 127–66.
- Newell, A. and H. A Simon, 1971, “Human Problem Solving: The State of the Theory in 1970”, American Psychologist , 26: 145–59.
- Newton, I. 1718, Opticks; or, A Treatise of the Reflections, Inflections and Colours of Light , London: Printed for W. and J. Innys, Printers to the Royal Society.
- Nickles, T., 1984, “Positive Science and Discoverability”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1984: 13–27.
- –––, 1985, “Beyond Divorce: Current Status of the Discovery Debate”, Philosophy of Science , 52: 177–206.
- –––, 1989, “Truth or Consequences? Generative versus Consequential Justification in Science”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1988, 393–405.
- –––, 2018, “TTT: A Fast Heuristic to New Theories?” in Danks, D. & Ippoliti, E. (eds.) Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer, 213–244.
- Pasquale, J.-F. de and Poirier, P., 2016, “Convolution and Modal Representations in Thagard and Stewart’s Neural Theory of Creativity: A Critical Analysis”, Synthese, 193: 1535–1560.
- Paul, E. S. and Kaufman, S. B. (eds.), 2014a, The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.001.0001.
- –––, 2014b, “Introducing: The Philosophy of Creativity”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays, New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.003.0001.
- Pietsch, W. 2015, “Aspects of Theory-Ladenness in Data-Intensive Science”, Philosophy of Science 82: 905–916.
- Popper, K., 2002 [1934/1959], The Logic of Scientific Discovery , London and New York: Routledge; original published in German in 1934; first English translation in 1959.
- Pöyhönen, S., 2017, “Value of Cognitive Diversity in Science”, Synthese, 194(11): 4519–4540. doi:10.1007/s11229-016-1147-4
- Pulte, H. 2019, “‘‘Tis Much Better to Do a Little with Certainty’: On the Reception of Newton’s Methodology”, in The Reception of Isaac Newton in Europe , Pulte, H, and Mandelbrote, S. (eds.), Continuum Publishing Corporation, 355–84.
- Reichenbach, H., 1938, Experience and Prediction. An Analysis of the Foundations and the Structure of Knowledge , Chicago: The University of Chicago Press.
- Richardson, A., 2006, “Freedom in a Scientific Society: Reading the Context of Reichenbach’s Contexts”, in J. Schickore and F. Steinle (eds.), Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer, 41–54.
- Russell, S. 2021, “Human-Compatible Artificial Intelligence”, in Human Like Machine Intelligence , Muggleton, S. and Charter, N. (eds.), Oxford: Oxford University Press, 4–23
- Schaffer, S., 1986, “Scientific Discoveries and the End of Natural Philosophy”, Social Studies of Science , 16: 387–420.
- –––, 1994, “Making Up Discovery”, in M.A. Boden (ed.), Dimensions of Creativity , Cambridge, MA: MIT Press, 13–51.
- Schaffner, K., 1993, Discovery and Explanation in Biology and Medicine , Chicago: University of Chicago Press.
- –––, 2008 “Theories, Models, and Equations in Biology: The Heuristic Search for Emergent Simplifications in Neurobiology”, Philosophy of Science , 75: 1008–21.
- Schickore, J. and F. Steinle, 2006, Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer.
- Schiller, F.C.S., 1917, “Scientific Discovery and Logical Proof”, in C.J. Singer (ed.), Studies in the History and Method of Science (Volume 1), Oxford: Clarendon, 235–89.
- Simon, H.A., 1973, “Does Scientific Discovery Have a Logic?”, Philosophy of Science , 40: 471–80.
- –––, 1977, Models of Discovery and Other Topics in the Methods of Science , Dordrecht: D. Reidel.
- Simon, H.A., P.W. Langley, and G.L. Bradshaw, 1981, “Scientific Discovery as Problem Solving”, Synthese , 47: 1–28.
- Simonton, D. K., 2014, “Hierarchies of Creative Domains: Disciplinary Constraints on Blind Variation and Selective Retention”, in Paul, E. S. and Kaufman, S. B. (eds), The Philosophy of Creativity: New Essays, New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.003.0013.
- Smith, G.E., 2002, “The Methodology of the Principia”, in G.E. Smith and I.B. Cohen (eds), The Cambridge Companion to Newton, Cambridge: Cambridge University Press, 138–73.
- Snyder, L.J., 1997, “Discoverers’ Induction”, Philosophy of Science , 64: 580–604.
- Solomon, M., 2009, “Standpoint and Creativity”, Hypatia : 226–37.
- Sternberg, R J. and T. I. Lubart, 1999, “The concept of creativity: Prospects and paradigms,” in R. J. Sternberg (ed.) Handbook of Creativity , Cambridge: Cambridge University Press, 3–15.
- Stokes, D., 2011, “Minimally Creative Thought”, Metaphilosophy , 42: 658–81.
- Tamaddoni-Nezhad, A., Bohan, D., Afroozi Milani, G., Raybould, A., Muggleton, S., 2021, “Human–Machine Scientific Discovery”, in Human Like Machine Intelligence , Muggleton, S. and Charter, N., (eds.), Oxford: Oxford University Press, 297–315
- Thagard, P., 1984, “Conceptual Combination and Scientific Discovery”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1984(1): 3–12.
- –––, 1999, How Scientists Explain Disease , Princeton: Princeton University Press.
- –––, 2010, “How Brains Make Mental Models”, in L. Magnani, N.J. Nersessian and P. Thagard (eds.), Model-Based Reasoning in Science & Technology , Berlin and Heidelberg: Springer, 447–61.
- –––, 2012, The Cognitive Science of Science , Cambridge, MA: MIT Press.
- Thagard, P. and Stewart, T. C., 2011, “The AHA! Experience: Creativity Through Emergent Binding in Neural Networks”, Cognitive Science , 35: 1–33.
- Thoma, Johanna, 2015, “The Epistemic Division of Labor Revisited”, Philosophy of Science , 82: 454–472. doi:10.1086/681768
- Weber, M., 2005, Philosophy of Experimental Biology , Cambridge: Cambridge University Press.
- Whewell, W., 1996 [1840], The Philosophy of the Inductive Sciences (Volume II), London: Routledge/Thoemmes.
- Weisberg, M. and Muldoon, R., 2009, “Epistemic Landscapes and the Division of Cognitive Labor”, Philosophy of Science , 76: 225–252. doi:10.1086/644786
- Williams, K. et al. 2015, “Cheaper Faster Drug Development Validated by the Repositioning of Drugs against Neglected Tropical Diseases”, Journal of the Royal Society Interface 12: 20141289. http://dx.doi.org/10.1098/rsif.2014.1289.
- Zahar, E., 1983, “Logic of Discovery or Psychology of Invention?”, British Journal for the Philosophy of Science , 34: 243–61.
- Zednik, C. and Jäkel, F. 2016 “Bayesian Reverse-Engineering Considered as a Research Strategy for Cognitive Science”, Synthese , 193, 3951–3985.
Overview of the Research Process
- First Online: 01 January 2012
Phyllis G. Supino, EdD
Research is a rigorous problem-solving process whose ultimate goal is the discovery of new knowledge. Research may include the description of a new phenomenon, definition of a new relationship, development of a new model, or application of an existing principle or procedure to a new context. Research is systematic, logical, empirical, reductive, replicable and transmittable, and generalizable. Research can be classified according to a variety of dimensions: basic, applied, or translational; hypothesis generating or hypothesis testing; retrospective or prospective; longitudinal or cross-sectional; observational or experimental; and quantitative or qualitative. The ultimate success of a research project is heavily dependent on adequate planning.
Have a language expert improve your writing
Run a free plagiarism check in 10 minutes, generate accurate citations for free.
- Knowledge Base
Methodology
- Exploratory Research | Definition, Guide, & Examples
Exploratory Research | Definition, Guide, & Examples
Published on December 6, 2021 by Tegan George. Revised on November 20, 2023.
Exploratory research is a methodological approach for investigating research questions that have not previously been studied in depth.
Exploratory research is often qualitative and primary in nature. However, a study with a large sample conducted in an exploratory manner can be quantitative as well. It is also often referred to as interpretive research or a grounded theory approach due to its flexible and open-ended nature.
Table of contents
When to use exploratory research, exploratory research questions, exploratory research data collection, step-by-step example of exploratory research, exploratory vs. explanatory research, advantages and disadvantages of exploratory research, other interesting articles, frequently asked questions about exploratory research.
Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.
You can use this type of research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.
Exploratory research questions are designed to help you understand more about a particular topic of interest. They can help you connect ideas and lay the groundwork for your analysis without introducing preconceived notions or assumptions.
Here are some examples:
- What effect does using a digital notebook have on the attention span of middle schoolers?
- What factors influence mental health in undergraduates?
- What outcomes are associated with an authoritative parenting style?
- In what ways does the presence of a non-native accent affect intelligibility?
- How can the use of a grocery delivery service reduce food waste in single-person households?
Collecting information on a previously unexplored topic can be challenging. Exploratory research can help you narrow down your topic, formulate a clear hypothesis and problem statement, and get the “lay of the land” on your topic.
Data collection using exploratory research is often divided into primary and secondary research methods, with data analysis following the same model.
Primary research
In primary research, your data is collected directly from primary sources : your participants. There is a variety of ways to collect primary data.
Some examples include:
- Survey methodology: Sending a survey out to the student body asking them if they would eat vegan meals
- Focus groups: Compiling groups of 8–10 students and discussing what they think of vegan options for dining hall food
- Interviews: Interviewing students entering and exiting the dining hall, asking if they would eat vegan meals
Secondary research
In secondary research, your data is collected from preexisting primary research, such as experiments or surveys.
Some other examples include:
- Case studies : Health of an all-vegan diet
- Literature reviews : Preexisting research about students’ eating habits and how they have changed over time
- Online polls, surveys, blog posts, or interviews; social media: Have other schools done something similar?
For some subjects, it’s possible to use large-n government data, such as the decennial census or yearly American Community Survey (ACS) open-source data.
How you proceed with your exploratory research design depends on the research method you choose to collect your data. In most cases, you will follow five steps.
We’ll walk you through the steps using the following example.
Suppose you are researching how the presence of a non-native accent affects intelligibility. Your preliminary reading suggests that fully eliminating a learner’s accent is rarely achievable; therefore, you would like to focus on improving intelligibility instead of reducing the learner’s accent.
Step 1: Identify your problem
The first step in conducting exploratory research is identifying what the problem is and whether this type of research is the right avenue for you to pursue. Remember that exploratory research is most advantageous when you are investigating a previously unexplored problem.
Step 2: Hypothesize a solution
The next step is to come up with a solution to the problem you’re investigating. Formulate a hypothetical statement to guide your research.
Step 3: Design your methodology
Next, conceptualize your data collection and data analysis methods and write them up in a research design.
Step 4: Collect and analyze data
Next, you proceed with collecting and analyzing your data so you can determine whether your preliminary results are in line with your hypothesis.
In most types of research, you should formulate your hypotheses a priori and refrain from changing them due to the increased risk of Type I errors and data integrity issues. However, in exploratory research, you are allowed to change your hypothesis based on your findings, since you are exploring a previously unexplained phenomenon that could have many explanations.
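To make that Type I risk concrete, the familywise error rate grows quickly with the number of independent tests run at the same significance level. A minimal sketch in Python (the test counts are illustrative, and real tests on one dataset are rarely fully independent):

```python
# Familywise error rate: the chance of at least one false positive
# across m independent hypothesis tests, each run at level alpha.
def familywise_error_rate(m: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** m

for m in (1, 3, 5, 10):
    rate = familywise_error_rate(m)
    print(f"{m:2d} tests -> {rate:.1%} chance of a false positive")
# 1 test keeps the 5.0% level; 10 tests inflate it to about 40.1%.
```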
Step 5: Avenues for future research
Decide if you would like to continue studying your topic. If so, it is likely that you will need to change to another type of research. As exploratory research is often qualitative in nature, you may need to conduct quantitative research with a larger sample size to achieve more generalizable results.
It can be easy to confuse exploratory research with explanatory research. To understand the relationship, it can help to remember that exploratory research lays the groundwork for later explanatory research.
Exploratory research investigates research questions that have not been studied in depth. The preliminary results often lay the groundwork for future analysis.
Explanatory research questions tend to start with “why” or “how”, and the goal is to explain why or how a previously studied phenomenon takes place.
Like any other research design, exploratory studies have their trade-offs: they provide a unique set of benefits but also come with downsides.
Advantages
- It can be very helpful in narrowing down a challenging or nebulous problem that has not been previously studied.
- It can serve as a great guide for future research, whether your own or another researcher’s. With new and challenging research problems, adding to the body of research in the early stages can be very fulfilling.
- It is very flexible, cost-effective, and open-ended. You are free to proceed however you think is best.
Disadvantages
- It usually lacks conclusive results, and results can be biased or subjective due to a lack of preexisting knowledge on your topic.
- It’s typically not externally valid and generalizable, and it suffers from many of the challenges of qualitative research .
- Since you are not operating within an existing research paradigm, this type of research can be very labor-intensive.
Frequently asked questions about exploratory research
Exploratory research is a methodological approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.
Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.
You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.
What is Research?
Research is the pursuit of new knowledge through the process of discovery. Scientific research involves diligent inquiry and systematic observation of phenomena. Most scientific research projects involve experimentation, often requiring testing the effect of changing conditions on the results. The conditions under which specific observations are made must be carefully controlled, and records must be meticulously maintained. This ensures that observations and results can be reproduced. Scientific research can be basic (fundamental) or applied. What is the difference? The National Science Foundation uses the following definitions in its resource surveys:
Basic research:
The objective of basic research is to gain more comprehensive knowledge or understanding of the subject under study, without specific applications in mind. In industry, basic research is defined as research that advances scientific knowledge but does not have specific immediate commercial objectives, although it may be in fields of present or potential commercial interest.
Applied research:
Applied research is aimed at gaining knowledge or understanding to determine the means by which a specific, recognized need may be met. In industry, applied research includes investigations oriented to discovering new scientific knowledge that has specific commercial objectives with respect to products, processes, or services.
What is research at the undergraduate level?
At the undergraduate level, research is self-directed work under the guidance and supervision of a mentor/advisor ― usually a university professor. A gradual transition towards independence is encouraged as students gain confidence and become able to work with minimal supervision. Students normally participate in an ongoing research project and investigate phenomena of interest to them and their advisor.
Definition of research
Synonyms (noun): disquisition, examination, exploration, inquisition, investigation
Synonyms (verb): delve (into), inquire (into), investigate, look (into)
Word History
Middle French recerche , from recercher to go about seeking, from Old French recerchier , from re- + cerchier, sercher to search — more at search
First known use: 1577 for the noun (in the meaning defined at sense 3); 1588 for the verb (in the meaning defined at transitive sense 1).
Phrases Containing research
- marketing research
- market research
- operations research
- oppo research
- research and development
- research park
- translational research
Discovery and Preclinical Work
Daria Mochly-Rosen
3 Chemical and Systems Biology, Stanford University School of Medicine, 269 Campus Drive, Center for Clinical Science Research Rm 3145a, Stanford, CA 94305-5174 USA
Kevin Grimes
4 Chemical and Systems Biology, Stanford University School of Medicine, 269 Campus Drive, Center for Clinical Science Research Rm 3145c, Stanford, CA 94305-5174 USA
In any drug discovery and development effort, we must accomplish a number of critical steps to arrive at a compound that is safe and efficacious, and also exhibits the complex array of desired drug-like behaviors that warrants advancement to the clinic. These tasks include target identification and validation; screening for active compounds; chemical modification of candidate compounds to achieve optimized pharmacology; formulating the final drug product; and establishing safety in preclinical models. “Repurposing” drugs that have previously been approved (or shown to be safe in humans) for new clinical indications can provide a faster, less risky, and more cost-effective route for bringing a new therapy to patients. Such shortcuts in development can be particularly valuable to resource-constrained academicians. When performing drug discovery research, we must be particularly attentive to the robustness of our experiments, because inability to reproduce academic data continues to be a sticking point when projects are transferred to industry. Our experiments must be appropriately blinded, statistically powered, and meticulously documented so that our findings are worthy of the large investment required for their further translation into a drug. This chapter walks through the essential preclinical drug development steps that lead to a clinical drug candidate.
Robustness of Preclinical Studies
A number of recent commentaries challenge the robustness of academic preclinical studies. In one report, only 11% of published preclinical cancer studies from academic labs could be reproduced by Amgen scientists; this low rate held even when the academic scientists who reported the original findings cooperated in reproducing the work at or with Amgen [ 1 ]. In another report, Bayer scientists found that ~75% of published academic studies brought in-house could not be reproduced, which resulted in termination of the efforts to develop therapeutics based on these academic findings [ 2 ]. So what is going on?
The following discussion focuses on academic data related to animal studies. I will not repeat here the discussions of the importance of using the right animal models, how to confirm findings using patient specimens, how to rely on a proper understanding of pharmacokinetics and pharmacodynamics when using animal models, and how to use proper “endpoints” for the studies. All these issues are discussed in later sections of this chapter. Instead, I focus on factors that may contribute to irreproducible animal data published by academicians and on some simple measures to mitigate these issues.
Box 2.1: What Surprised an Academician?
In 2004, when I temporarily moved from my academic lab to serve as the CSO of KAI Pharmaceuticals, I was hurt when our then CEO, who holds a B.A. in history, told me, “You will now learn that your academic work is not as robust as industry’s standard.” Like you, I take great pride in our work in academia. I felt that conducting blinded studies, using several species, and reproducing the work in independent labs all combined to ensure high-quality and valid data. That was not enough, I quickly learned. –DM-R
Box 2.2: Key Terms and Abbreviations
CSO: Chief Scientific Officer
Preclinical animal studies: animal studies done to validate a disease target and test the performance of a molecule prior to moving into human testing
p -value: a statistical measure of the probability of obtaining a result at least as extreme as the one observed. If the p -value is less than the significance level (usually 0.05 or 0.01), one rejects the null hypothesis that there is no treatment effect
CDER: Center for Drug Evaluation and Research, within the Food and Drug Administration
Endpoints: measurements (e.g., weight or tumor size) or observations (e.g., motor control or healthiness) used in a study to evaluate the effectiveness or safety of a treatment
Orphan indication: an FDA designation for a disease or condition that affects fewer than 200,000 people in the USA, or for a treatment that is not expected to recoup its R&D costs due to pricing constraints
“me-toos”: drugs that are approved after other chemically similar compounds or molecules with the same mechanism of action are already on the market
Factors that Contribute to Irreproducible Data
Heterogeneous Experimental Conditions
Animal studies can be greatly affected by many factors. Yet we often do not give proper attention to these potentially confounding factors and/or do not record the conditions used in detail. For example, rodents are nocturnal animals: data related to their immune response, eating, exercise, ability to learn tasks, etc. are greatly affected by the time of day when the experiment is conducted. The chow feed is another important variable that can affect animal-derived data; some feed is rich in soy and therefore contributes feminizing hormones to both males and females. Variation in the feed may affect drug uptake and metabolism, the integrity of the immune response, etc. Other confounding factors relate to housing conditions, including noise, strong smells, and crowding; a good animal facility should minimize them. Latent or full-blown infection by viruses, bacteria, mites, and other parasites can also affect the results of the study (see Box 2.3). All these variables should be held to a minimum, and detailed information should be recorded so that, even if there is no room to provide it in full in the publication, we can share the specific conditions used during our study when contacted by a commercial entity or another academic laboratory.
Box 2.3: Lack of Reproducibility May Relate to Previously Unsuspected Confounding Factors
Lack of reproducibility of preclinical reports does not mean that the data are fabricated or wrong. One of the better-documented cases of inability to reproduce data in mice relates to the induction of type 1 diabetes in NOD mice. Initial claims attributed the increased diabetes incidence reported by some groups to differences in housing the mice under germ-free conditions. However, more recent data showed that intestinal microbiota are the critical confounding factor; the presence of Bacillus cereus in the gut delayed onset and reduced incidence of type 1 diabetes [ 3 ].
Bias and Incomplete Reporting
It is critical that the investigators who assess the animal data be blinded to the experimental conditions; unintended bias can greatly affect the analysis, especially when the endpoint determinations are subjective.
Another source of bias is dismissing and not reporting negative or inconsistent data. The investigator may have a reasonable rationale for excluding data related to certain animals; we should include that rationale in the methods section and let readers draw their own conclusions. All data (positive and negative) should be reported, as they may help identify important variables to consider in human studies. For example, the observation that sex and age can affect the therapeutic response to drugs in animal models of heart attack went unreported for a long time. When these findings were finally reported, reviewers started requesting that preclinical studies include animals of both sexes.
Box 2.4: Recommendations to Improve Robustness of Preclinical Studies (Expanded from Ref. [ 4 ])
- Keep detailed information about the experimental conditions.
- Keep detailed information on the source of all the reagents and lot numbers used in the study.
- Seek advice of statisticians during the study design to ensure that the study is powered to address the question at hand, and that the appropriate statistical tests are applied.
- Include appropriate negative controls and—when possible—positive controls for the study.
- Have each study reproduced by another investigator in the lab, and in an independent lab if feasible.
- The investigators should be blinded to the identity of the control and treatment groups during data analysis.
- Provide information on all the animals that were included in the study, those that were excluded from the study and the reasons for the exclusion.
- Validate reagents for the intended application (e.g., selectivity of small molecule, appropriate antibody for immunohistochemistry).
All studies should include both positive and negative controls. For example, a group of animals treated with a drug already approved for the indication enables a side-by-side comparison of the benefit of our intervention, as well as confirming that the disease model is relevant. Academics sometimes assume that certain controls are wasteful—“We have done these controls before” is a rationale we often use. However, the control experiments need to be done side-by-side with the treatment arm, as unexpected factors can contribute to the outcome. A SPARK investigator recently told us that they omitted an oral gavage of their control subjects before the last blood draw, only to discover later that gavage alone increases neutrophil number in the blood—possibly due to animal stress. Needless to say, the entire study had to be repeated.
It is important that critical experiments are repeated by a different investigator in the same lab to ensure that the experimental protocol is detailed enough to be reproduced by an unbiased researcher. When I first reported on the benefit after heart attack of treating animals with an inhibitor we developed for delta protein kinase C, the benefit was so surprising that one skeptic refused to believe the results. It was good to be able to answer that three members of the lab reproduced the same data. It was even better to be able to report that two other labs reproduced our data, and it was really a coup when that skeptic obtained the same data in his own laboratory.
Insufficient Statistical Power of the Study or Inappropriate Statistical Analysis
To save on animal use, researchers in academia often use too few animals per treatment group. Unfortunately, a p-value smaller than 0.05, although nominally significant, is not robust enough if the study was done with five or fewer animals per treatment group.
If you are like me, you contact a statistician only when you try to analyze the data. A recent commentary urges academicians to recognize the critical contribution of statisticians in preclinical research [ 5 ]. Statisticians should be engaged early during the study planning to ensure that the number of animals included is sufficient and that the study is powered to provide an unequivocal answer. This will not only ease the review process, but importantly will increase the rigor of the study. Let us not have our budget dictate the number of animals per group we use, or we risk sacrificing the robustness of our results!
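As an illustration, a prospective power calculation takes only a few lines with standard tools. A minimal sketch using Python’s statsmodels; the 1.2-standard-deviation effect size is an assumed, illustrative value, not a recommendation:

```python
# How many animals per group are needed to detect a given effect size
# (Cohen's d, in standard-deviation units) at 80% power and alpha = 0.05?
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.2,  # assumed large effect: a 1.2 SD difference between groups
    alpha=0.05,
    power=0.80,
)
print(f"~{n_per_group:.0f} animals per group")  # about 12 per group in this scenario
```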
Biostatisticians can also weigh in on the appropriateness of the statistical tests used to analyze the results. Often, there is more than one statistical test available to compare groups, but characteristics of the data (e.g., size, distribution, etc.) may make some tests inappropriate. For example, we should not use a t -test on nonparametric data.
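For instance, a quick normality check can flag when a t-test is the wrong choice. A minimal sketch using SciPy, with invented tumor-volume numbers used purely for illustration:

```python
# Choose between a t-test and its nonparametric counterpart depending on
# whether the data plausibly come from a normal distribution.
from scipy import stats

control = [1.1, 1.3, 0.9, 1.2, 5.8, 1.0, 1.4]  # invented values; 5.8 skews the group
treated = [0.6, 0.8, 0.5, 0.9, 0.7, 0.4, 0.6]  # invented values

# Shapiro-Wilk normality test: a small p-value suggests non-normal data.
_, p_ctrl = stats.shapiro(control)
_, p_trt = stats.shapiro(treated)

if min(p_ctrl, p_trt) < 0.05:
    stat, p = stats.mannwhitneyu(control, treated)  # nonparametric rank test
else:
    stat, p = stats.ttest_ind(control, treated)     # parametric t-test
print(f"p = {p:.4f}")
```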
Given the Amgen and Bayer reproducibility studies, should we even attempt to do preclinical work in academia? Let us not throw the baby out with the bath water. Academic research provides essential fuel for new drug development in general, and for orphan indications in particular. In a recent analysis of 252 drugs approved by CDER between 1998 and 2007, only 47% were considered scientifically novel, and academic discoveries contributed to a third of those novel molecules [ 6 ]. In addition, of drugs approved for orphan indications during that period, almost 50% were based on academic discoveries. So academic research is an important engine for innovation in drug discovery. Nevertheless, as Begley and Ellis conclude, the bar for reproducibility in performing and presenting preclinical studies must be raised. More rigorous preclinical research in academia will reduce wasted research effort and money in industry, leading to cheaper drug discovery and benefit to patients.
Box 2.5: The Bottom Line
The bar for reproducibility in performing and presenting preclinical studies carried out by academic scientists must be raised, lest innovative academic work go unnoticed by industry partners.
Repurposing Drugs
Drug repurposing (also called drug repositioning) refers to the practice of developing an existing drug for a new clinical indication. Typically, a drug selected for repurposing has been tested extensively in humans and has a known safety profile. The drug may have received regulatory approval for its original indication or may have stalled in development, perhaps for lack of efficacy or an unacceptable toxicity profile for a nonserious clinical indication.
Repurposing can be a faster, less risky, and more cost-effective route to benefit patients and is therefore particularly attractive for academics and other not-for-profit drug developers. Pharmaceutical companies, biotechnology companies, and health care investors are often less enthusiastic about supporting the development of a repurposed drug because the active compound is typically not patentable. Nonetheless, proprietary claims regarding formulation, dosing, or clinical indication may allow a period of exclusive marketing and lead to a profitable program. The repurposing of the teratogenic sedative thalidomide for the treatment of multiple myeloma is an example of the profitable exploitation of a drug whose patent had long ago expired.
While physicians often prescribe drugs for “off label” uses when caring for individual patients, a drug repurposing development program for a novel indication will require clinical human experimentation and, therefore, approval of your Institutional Review Board (IRB). Advancing a repurposed compound to clinical study may also require the filing of an Investigational New Drug application (IND) with the US Food and Drug Administration (FDA) or relevant national regulatory agency (if the clinical studies will be conducted outside of the USA).
Drug studies typically require a new IND if the research will be reported to the FDA in support of a marketing claim for the new indication, i.e., a new drug label, or if the research involves a “route of administration or dosage level or use in a patient population or other factor that significantly increases the risks (or decreases the acceptability of the risks) associated with the use of the drug product” [ 7 ]. When in doubt, check with your institution’s legal or compliance office or directly with the FDA.
Box 2.6: Key Terms and Abbreviations
Repurposing: finding a new indication, formulation or route of administration for an existing drug
Off-label: indications not listed on the drug label (and therefore not evaluated by the FDA)
IRB (Institutional Review Board): a committee formally designated by an institution to review, approve the initiation of, and conduct periodic reviews of biomedical research involving human subjects
IND: Investigational New Drug application; document filed with the FDA prior to initiating research on human subjects using any drug that has not been previously approved for the proposed clinical indication, dosing regimen, or patient population
FDA: Food and Drug Administration
NIH: National Institutes of Health
Drug Master File: a confidential document submitted to the FDA (or national regulatory agency) outlining specifications for the manufacturing, processing, packaging and storing of a therapeutic agent(s)
GLP: Good Laboratory Practice; extensive documentation of each procedural step to ensure high quality, reproducible studies
Pharmacokinetics: measurements of what the body does to a drug (absorption, distribution, metabolism and excretion)
Identifying Repurposing Opportunities
When we have discovered a novel, validated drug target, screening a library of previously approved drugs for activity against our target may lead to a drug repurposing opportunity. Researchers at the US National Institutes of Health (NIH) have assembled a comprehensive list of drugs that have been previously approved by the FDA ( n = 2,356) and by regulatory agencies worldwide ( n = 3,936, inclusive of the FDA). In addition, they have compiled a library of 2,750 of these previously approved drugs and of 4,881 drugs that have undergone human testing, but have not been granted regulatory approval [ 8 ]. Researchers may apply to have the NIH test their targets against this library. Alternatively, many high-throughput screening (HTS) centers now also include a collection of previously approved drugs as a part of their chemical library.
A second path to repurposing is to apply a known modulator of a specific biologic target to a new disease. For example, eflornithine is an inhibitor of ornithine decarboxylase (ODC), a key enzyme in mammalian cells for converting ornithine to polyamines. The polyamines, in turn, are important in cell proliferation, differentiation and growth. Eflornithine stalled in development when it failed to show adequate efficacy as an antitumor agent, but has subsequently been successfully redirected as a treatment for African sleeping sickness, since ODC is also present in the causative parasite.
A third avenue for identifying repurposing opportunities is through astute clinical observation and exploitation of known or unanticipated side effects. For example, erythromycin is well known for causing gastrointestinal distress and diarrhea. This observation has led to its clinical use as a promotility agent in selected patients with a functional, non-obstructive ileus. Similarly, sildenafil originally entered clinical development as an anti-angina/antihypertensive agent. A serendipitous clinical observation led to its development as a treatment for erectile dysfunction—an extremely lucrative market opportunity.
The following sections will discuss the repurposing of drugs based upon the drug’s regulatory status, patent status, and intended indication, dose, and route of administration. In general, the regulatory agencies will focus first and foremost on the safety of the proposed dosage and formulation in the new patient population. Of course, we must also show efficacy to gain regulatory marketing approval.
Previously Approved Drugs Using the Same (or Lower) Dose and Route of Administration
This category presents the fastest route to the clinic. If the drug is generically available and the intended patient population is not at increased safety risk, there are relatively few barriers to conducting a clinical study and publishing the results. Of course, we will require IRB approval prior to initiating the study. Once we publish our study results, physicians will be free to prescribe the drug off-label without a formal regulatory approval for the new indication. If there is reason to suspect increased risk or that the known drug risks are less acceptable for the intended indication and study population, we must file an IND.
If the drug is proprietary, we should consider approaching the company that markets the drug to solicit support for our study. Depending upon the size of the current market and the number of years remaining on the patents, the company may see our repurposing proposal as either an opportunity or a threat. Our proposed new market may represent an attractive pipeline extension. On the other hand, unanticipated negative adverse effects in the clinical study may threaten the existing franchise. If an IND is required, we must have the company’s approval for the FDA to access their proprietary Drug Master File at the agency; thus, company consent is required. If an IND is not required, we may proceed with our study, even without the company’s consent, assuming that we have obtained IRB approval and have adequate financial resources.
Working with the company can provide many advantages beyond financial support or free study drug. The company scientists will have an extensive working knowledge of the drug’s metabolism, formulation, side effects, and potential drug–drug interactions. This information can be invaluable in the design and execution of the new clinical study.
New Route of Administration, Dosing, or Formulation
Regulatory agencies require that a drug be both safe and efficacious. When a drug is administered via a different route (e.g., via inhalation instead of intravenously), at higher dosages, or in a new formulation, the safety profile will be altered and human efficacy will be unproven. Therefore, an IND will be required.
Although prior human experience with the drug can be predictive and help guide preclinical studies, supplemental GLP safety studies will typically be required to determine that the route, dose, or formulation is safe to test in humans. At a minimum, preclinical studies should be conducted to assess safety and characterize pharmacokinetics for the new formulation and/or route of administration. Non-GLP preclinical efficacy studies can be useful in demonstrating biological effect and predicting the clinical dosing requirements. An open discussion with the regulatory agency early in the course of development can be invaluable in determining which preclinical studies will be required prior to entering clinical study.
Non-approved Drug with Human Trial Data
A number of drugs fail to advance beyond their initial phase 2 or 3 clinical study because of lack of efficacy for their intended clinical indication. These drugs are typically “shelved” by the sponsoring company, but can be very valuable if a new target or clinical indication can be identified. The timeline for developing a “shelved” drug for a new indication can be appreciably shortened and less costly because the company sponsor already has a complete preclinical package, human safety data, and a Drug Master File with the FDA (or similar regulatory agency). Often, clinical-grade drug product is also available if it still meets its quality specifications. Typically, we must work with the original company sponsor because the drug is under patent protection and/or all the previous data filed with the regulatory agency are proprietary and owned by the company. The US NIH has recently announced an industry/government collaboration program that gives academicians access to test such compounds [ 9 ].
Box 2.7: The Bottom Line
Drug repositioning can be a faster, less risky, and less expensive route to develop a new therapy for a clinical indication. Repurposing is particularly attractive for academics and other not-for-profit drug developers who are seeking cures for patients, but have limited financial resources. Some repurposing programs can be quite successful commercially if they have intellectual property claims that block competitors or privileged regulatory status (e.g., orphan disease designation).
Developing Assays for High-Throughput Screening (HTS)
The aim of HTS of chemical libraries is to identify small molecules (chemical leads) that hit or affect a protein target or cellular phenotype. The screen typically identifies good starting chemical entities that will be improved upon (optimized) using medicinal chemistry. There are alternative approaches for identifying small libraries of chemical leads, such as searching the published literature (including patents) or screening substrate or transition-state analogs. In silico and fragment-based screening are also options for screening large libraries of molecules, but these methods require prior elucidation of the target structure and, in the case of fragment-based screening, high assay sensitivity. Here we focus on the development of assays for identifying and characterizing active compounds from large (>100,000 compounds) drug-like molecule libraries using HTS.
What is unique about HTS? It relies on robust, miniaturized, “mix and measure” assays. A robust assay is one with a high Z ′-factor [ 10 ], good reproducibility between runs, and resistance to interference. With the large compound libraries typically screened via HTS, cost and logistics often dictate that only a single well per compound be run. Thus, even with a high Z ′-factor, there are considerable opportunities for false positive (non-reproducible) and false negative (missed actives) results due to random variation. This should be considered when designing, optimizing, and characterizing the primary screening assay. Often, multiple iterations of assay design and testing are required to adapt a low-throughput (<50 samples) assay for optimal performance in HTS.
The typical HTS workflow can be broken into the following steps (Fig. 2.1 ):
- Procuring or scaling up production of the reagents (e.g., proteins or cells, substrates, solvents, reporters)
- Developing the assay, including miniaturization
- Assay optimization (e.g., Z ′-factor, reproducibility, sensitivity)
- Characterization of the optimized assay (e.g., sensitivity to time and temperature, linear range)
- Pilot screen with triplicate runs of a small selection of the compound library
- Primary HTS
- Selection of actives and cherry-picking samples
- Confirmation testing
- Compound structure-based clustering
- Confirmation of hits, evaluating the purity and identity of selected actives using LC-MS followed by NMR, and confirming activity in secondary assays
Fig. 2.1 General high-throughput screen workflow
HTS assays are typically run in 2–30 μl volumes in 384- or 1,536-well microtiter plates, although some assays resist miniaturization beyond 96-well plates. The choice of assay technology is often dependent upon the detection equipment available, cost of reagents (particularly for a screen of a large library of compounds), stability of the reagents, ease of use, and the potential for assay technology-dependent false positives.
Box 2.8: Key Terms and Abbreviations
Chemical hit: small molecule that affected the target or phenotype
HTS: high-throughput screening
Optimization: medicinal chemistry effort to improve the properties of a chemical lead
Mix and measure assay: an assay that does not require washing away any of its components
Z ′-factor: measure of assay signal relative to noise
Competitive inhibitor: molecule that binds to the target enzyme and excludes substrate binding (and vice versa)
Uncompetitive inhibitor: molecule that binds only to the target enzyme-substrate complex
Noncompetitive inhibitor: molecule that binds to the target enzyme independent of substrate binding
Edge-effect: situation in which outside wells of a multi-well plate have a bias toward different values than the rest of the plate
K m (Michaelis constant): substrate concentration at which an enzymatic reaction rate is ½ of the maximal reaction rate. K m is a way to characterize the enzyme’s affinity for the substrate.
Once an assay technology is chosen, assay design and optimization involves tradeoffs between assay sensitivity to compounds, Z ′-factor, and cost. If cost is not a consideration, one can often add large amounts of detection reagent and get both an enhanced Z ′-factor and an increased sensitivity to inhibition by compound. In practice, especially for an academic effort, cost is an important consideration. For enzyme assays, the choice of substrate concentration (relative to K m ) will affect the type of inhibitors or activators that are identified. Running the assay with the starting substrate concentration equal to K m will give the best overall sensitivity to competitive, uncompetitive, and noncompetitive inhibitors [ 11 ]. Unlike most assays designed to study enzyme kinetics, HTS assays often allow substrate conversion to proceed to around 50%, since this produces much better signal/noise at a loss of only ~1.4-fold in sensitivity to competitive compound inhibition [ 12 ]. Phenotypic screens use a biological response (e.g., cell death, protein translocation) to report compound activity. Because phenotypic responses reflect a complex biological cascade, they can be more accurate readouts of the therapeutic potential of a molecule. Confirmation via secondary assays, however, can be more difficult, as the compound target in a phenotypic screen may be unknown.
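The substrate-concentration tradeoff for competitive inhibitors can be made concrete with the Cheng-Prusoff relation, a standard result from enzyme kinetics (not specific to this chapter). A minimal Python sketch, assuming a hypothetical true Ki of 100 nM:

```python
# Cheng-Prusoff relation for a competitive inhibitor: the measured IC50
# rises with substrate concentration, so screening at high [S] makes
# competitive hits look weaker than they really are.
def apparent_ic50(ki_nm: float, s_over_km: float) -> float:
    return ki_nm * (1 + s_over_km)

KI_NM = 100.0  # assumed true inhibition constant, in nM
for s_over_km in (0.2, 1.0, 5.0):
    print(f"[S]/Km = {s_over_km}: apparent IC50 = {apparent_ic50(KI_NM, s_over_km):.0f} nM")
# At [S] = Km the apparent IC50 is double the true Ki; at 5x Km it is six-fold.
```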
Box 2.9: Z ′-Factor Defined
The Z ′-factor reports the statistical effect size of the difference between an assay’s signal (positive control) and noise (negative control). Good HTS assays have a Z ′-factor between 0.5 and 1.
Z′ = 1 − 3(σ_p + σ_n) / |μ_p − μ_n|
where the data follow a normal distribution and:
σ_p: standard deviation of the positive-control replicates
σ_n: standard deviation of the negative-control replicates
μ_p: mean of the positive-control replicates
μ_n: mean of the negative-control replicates
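A minimal Python sketch of the calculation, using simulated control wells with assumed means and standard deviations:

```python
# Compute the Z'-factor from positive- and negative-control replicate wells.
import numpy as np

def z_prime(pos: np.ndarray, neg: np.ndarray) -> float:
    return 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

rng = np.random.default_rng(0)
pos = rng.normal(1000, 50, 32)  # assumed signal wells: mean 1000, SD 50
neg = rng.normal(100, 40, 32)   # assumed background wells: mean 100, SD 40
print(f"Z' = {z_prime(pos, neg):.2f}")  # ~0.7 for these assumed statistics
```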
Once the assay has been developed, it often will require optimization to obtain an adequately high Z ′-factor and robustness. This is particularly true if the assay suffers from “edge-effects,” a situation where the outside wells (in a control plate) have a bias toward different values than the rest of the plate. This can be caused by differences in temperature (plates warm up from the outside), evaporation, and in the case of plated cell-based assays, differential cell growth. It can take considerable experimental effort to identify the cause(s) of the artifact and redesign the assay to minimize its effects. As an example, for a thermal gradient edge effect, a long incubation with a lower enzyme concentration might replace a short incubation to allow time for thermal equilibration before assay readout.
Box 2.10: What Surprised an Academician?
Nearly all high-throughput screens identify reproducible (i.e., not produced by variance) false positives. This is why it is so important to have secondary assays with different reporters to confirm hits.
During assay optimization, the assay conditions should be characterized with regard to linearity with the concentration of the target protein (e.g., binding and enzyme assays), linearity with time, stability of the reagents on the assay equipment (necessary because of the time required for assay runs), solvent (typically DMSO), sensitivity, and pharmacology (if suitable standards are available). If it doesn’t interfere with the assay, it is advisable to use fairly high concentrations (5–10% v/v) of DMSO, since this tends to increase the solubility of many compounds. However, cell-based assays are typically relatively sensitive to DMSO, with the limit often being at 0.5–2% v/v.
After assay optimization, the assay protocol is “frozen” and a pilot screen is run to rigorously test whether the assay is ready for HTS. Three identical sets of compound plates (typically several thousand unique compounds) are run through the assay, one set (in randomized plate order) per run. The data are analyzed using analysis of variance to determine the sizes of the systematic errors due to plate order, plate row, plate column, etc. Ideally the variance is almost all “random,” with only very small contributions from systematic errors.
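A minimal sketch of such a decomposition in Python using statsmodels, with simulated plate readouts standing in for real pilot-screen data (all names and numbers are illustrative):

```python
# Decompose pilot-screen variance into systematic (run, row, column)
# and residual components with a simple ANOVA.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows, cols, runs = 16, 24, 3  # one 384-well plate layout, three replicate runs
df = pd.DataFrame(
    [(run, r, c, rng.normal(100, 10))  # purely random signal in this simulation
     for run in range(runs) for r in range(rows) for c in range(cols)],
    columns=["run", "row", "col", "value"],
)
fit = smf.ols("value ~ C(run) + C(row) + C(col)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))  # ideally run/row/col explain little variance
```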
All HTS assay designs result in the identification of reproducible (i.e., not produced by variance) false positives. These can result from compound interference with the assay readout or from undesirable modes of interaction with the target. Examples include reactivity of the test compound leading to covalent modification of the target, or compounds that directly inhibit detection of a reporter gene, a common concern in luciferase-based assays. Thus, it is essential to develop additional independent assays to validate the hits (active compounds) from the primary screen or, if that is not possible, to eliminate potential mechanisms producing false positives.
These validation assays should seek to answer the following questions:
- Does the compound interact directly and reversibly with the molecular target, and with reasonable stoichiometry?
- Does the reported structure of the active compound match what is in the well? Is it reasonably (>90%) pure? If the compound is <99% pure, is the activity quantitatively the same after purification or resynthesis?
- Does the compound interfere directly with the reporter readout used?
- Is compound activity quantitatively reproducible using a different assay technology (e.g., cell-based versus in vitro )?
- Is the activity reversible after washout? (The relationship between potency and expected off-rate should be considered.)
- Is there evidence of a structure–activity relationship (SAR) for the active compounds? Are there related inactive compounds in the library?
- Is the compound just generally reactive under the assay conditions? This can be assessed by comparing compound activity before and after incubation with potential target moieties (e.g., 5 mM lysine dissolved in assay buffer).
Following these steps should result in a well-characterized primary screening assay and a set of secondary assays suitable for an HTS campaign in academia or at one of the NIH Molecular Libraries Probe Production Centers Network sites.
Box 2.11: Recommendations
For a biochemical HTS assay, substrate concentration should be equal to K m to help identify competitive, uncompetitive, and noncompetitive inhibitors. Different conditions may be required to identify activators (depending on the sensitivity of the assay). Usually 50% of the substrate should be converted to product for optimal signal/noise.
For cell-based assays, the percentage of organic solvents should be minimized and solvent-alone should be run as a control during assay development. Live cell imaging can be particularly challenging for large libraries unless the microscope is also in a temperature, % CO 2 , and humidity-controlled environment.
Box 2.12: Key Web Sites
NIH Molecular Libraries Program
http://commonfund.nih.gov/molecularlibraries/overview.aspx
Lilly/NCGC Assay Guidance Manual
http://www.ncgc.nih.gov/guidance/manual_toc.html
Society for Laboratory Automation and Screening
http://www.slas.org/
Journal of Biomolecular Screening
http://jbx.sagepub.com/
Medicinal Chemistry and Lead Optimization
Daniel A. Erlanson
Carmot Therapeutics, Inc., San Francisco, CA USA
Lead optimization means taking a small molecule with promising properties and transforming this “hit” into a drug. It is like molecular sculpture, but instead of developing an aesthetically pleasing statue (which sometimes occurs), the aim is to construct a safe and effective molecule for treating a specific disease. And instead of chisels and plaster, practitioners—medicinal chemists—apply the tools of chemical synthesis.
The previous section covered HTS which, if successful, has generated a hit, a small molecule that has some activity for the target or phenotype of interest. Of course, this hit is likely a long way from being a drug. Improving affinity is often the first task of lead optimization. A drug should be as potent as possible to reduce the cost of production, to minimize the size of the pill or injection needed, and to reduce the potential for off-target effects. Most drugs have IC 50 or EC 50 values (half maximal inhibitory concentration or half maximal effective concentration) around 10 nM or so, with considerable variation to either side. Hits from HTS are sometimes nanomolar potency, but more often low micromolar, which means that binding affinity may need to be improved by several orders of magnitude.
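Potency figures such as the IC50 are usually estimated by fitting a sigmoidal dose-response model to assay data. A minimal sketch using SciPy with invented data points; the four-parameter logistic (Hill) model used here is one common choice, not the only one:

```python
# Fit a four-parameter logistic (Hill) curve to dose-response data
# to estimate a compound's IC50.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    return bottom + (top - bottom) / (1 + (conc / ic50) ** slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])  # uM, invented
resp = np.array([98, 95, 88, 70, 45, 22, 10, 5])               # % activity, invented
params, _ = curve_fit(hill, conc, resp, p0=[0, 100, 1.0, 1.0])
print(f"IC50 ~ {params[2]:.2f} uM")  # roughly 0.8 uM for these invented points
```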
Lead Optimization Considerations
Improved Affinity
Knowing how the molecule binds can generate ideas on how to improve potency. For example, there may be a pocket on the protein near the small molecule, and adding a chemical group (or moiety) to reach this pocket may pick up additional interactions and thus additional binding energy. Alternatively, a structure may reveal an unfavorable contact: perhaps a hydrophobic (water-hating) portion of the ligand is exposed to solvent, or a hydrophilic (water-loving) portion is buried in a greasy hydrophobic part of the protein; the medicinal chemist would make analogs of the molecule without the unfavorable contact and test the activities of the new molecules. Ideally this will lead to better potency, but often changes are less dramatic than expected, and additional molecules will need to be made. This iterative process is called structure-based drug design. In the best cases, it is possible to obtain structural information on how the small molecule binds to the target using experimental techniques such as X-ray crystallography or NMR spectroscopy. Failing this, computational modeling can give some idea of the binding mode if the structure of the target is known or is believed to be similar to another characterized target.
It is also possible to do lead optimization in the absence of structure by making somewhat random changes to the molecule and seeing what effects these have on activity. Over the course of several iterations, structure-activity relationships (SARs) emerge. SAR can provide a wealth of knowledge that a medicinal chemist can use to understand the binding mode. Although experimental structural information has become a key tool in medicinal chemistry, it is worth remembering that X-ray crystallography was not sufficiently rapid and general for routine use until the 1980s and 1990s, and even today medicinal chemistry is applied to many targets for which direct structural information is not available, such as most membrane proteins.
Box 2.13: Key Terms and Abbreviations
IC 50 : half maximal inhibitory concentration
EC 50 : half maximal effective concentration
Chemical moiety: a functional group or portion of a molecule
SAR: structure–activity relationships
Lipophilicity: the tendency of a molecule to partition between oil and water
PK: pharmacokinetics
ADME: absorption, distribution, metabolism and excretion
PD: pharmacodynamics
hERG channel: human Ether-à-go-go-Related Gene channel, a potassium ion channel that is important to normal electrical activity of the heart. Inhibition of this channel can lead to sometimes fatal cardiac arrhythmias
CYP: cytochrome P450; a large and diverse group of enzymes that play a major role in drug metabolism
Improved Selectivity
Selectivity is another critical factor in lead optimization. Researchers generally want their drug lead to be active against the target of interest but not active against other proteins. Selectivity is most readily assessed by simply measuring activity of the molecule against other proteins, especially closely related ones, but this can be a daunting task. For example, there are about 500 protein kinases in the human genome, so measuring activity against all or even most of them can get pricey. Fortunately, enough companies have been working in the kinase field that there are now commercial offerings to confirm selectivity against a large number of kinases in a short period of time. However, such selectivity testing for newer classes of targets and enzymes is often not available. Note that selectivity testing within a related family of enzymes or receptors does not rule out the possibility that your compound will bind to a protein outside that family. Before compounds advance into the clinic they are tested against a panel of up to several hundred targets that could cause problems (see below). However, not everything can be tested in vitro , and off-target effects often manifest as side effects and toxicity during in vivo studies.
Improved Physicochemical Properties
Throughout the course of lead optimization, it is important to keep an eye on the physicochemical properties of the molecule such as solubility and lipophilicity (the way it partitions between water and oil or membranes). Solubility, in particular, can be a tricky balancing act because improving potency often involves increasing the size and lipophilicity of a molecule, leading to decreased solubility. Chemists sometimes refer to particularly insoluble compounds as “brick dust.”
Improved Biological Potency
Initial screens are often conducted using pure isolated proteins under highly artificial conditions. Therefore, it is essential that potency be determined in more biologically relevant systems such as whole cell assays; all too often compounds that show activity against the isolated protein will show less or no activity in cells. Sometimes this is due to factors that a medicinal chemist may be able to fix rationally. For example, compounds that are negatively charged can have difficulty crossing cell membranes to interact with targets inside the cell. In other cases, it is unclear why there is a disconnect; in these cases it may be necessary to make more dramatic changes to the lead series, or switch to another series entirely.
Improved Pharmacological Properties
Potency and selectivity are important, but other parameters also require optimization. In fact, a rookie mistake is to focus exclusively on potency. Many things can happen to a drug on its way to its target. This is especially true for oral drugs: the body treats anything coming in through the mouth as food and tries to digest it or, failing that, to excrete it. The study of what happens to a drug in vivo is called pharmacokinetics (PK), which is covered in more detail in Sect. 2.8 . A critical aspect of lead optimization is to measure and improve the ADME (absorption, distribution, metabolism and excretion) properties of a molecule, keeping it in the body for long enough and at sufficient levels to do its job without causing problems. Many of the individual proteins that affect a drug’s path into and through the body are known, and experiments with isolated enzymes, plasma, or liver extracts can be helpful, but ultimately animal studies are essential to understand a molecule’s PK.
Because so many different factors are at play in pharmacokinetics, medicinal chemists often turn to empirically derived rules to try to tune the properties of their molecules. The most famous of these is Chris Lipinski’s Rule of 5, a set of guidelines concerning molecular weight, lipophilicity, and other properties that predict the likelihood that a drug candidate will be orally bioavailable [13]. When performing SAR to optimize PK, a specific moiety may prove prone to metabolism, and altering this bit of the small molecule can improve the overall stability. Keep in mind that such rules are not hard cut-offs, but directional guidelines to improve the probability of success.
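As a concrete illustration, here is a minimal sketch of a Rule-of-5 check using the commonly cited thresholds (molecular weight over 500 Da, calculated logP over 5, more than 5 hydrogen-bond donors, more than 10 hydrogen-bond acceptors). The example property values are hypothetical; in practice such properties would come from a cheminformatics toolkit.

```python
# A minimal sketch of a Lipinski Rule-of-5 check. The four thresholds are the
# commonly cited ones; the property values of the example compound are hypothetical.

def rule_of_five_violations(mol_weight, clogp, h_donors, h_acceptors):
    """Count how many of Lipinski's four criteria a compound violates."""
    violations = 0
    if mol_weight > 500:   # molecular weight over 500 Da
        violations += 1
    if clogp > 5:          # calculated logP over 5 (too lipophilic)
        violations += 1
    if h_donors > 5:       # more than 5 hydrogen-bond donors
        violations += 1
    if h_acceptors > 10:   # more than 10 hydrogen-bond acceptors
        violations += 1
    return violations

# Hypothetical lead compound:
n = rule_of_five_violations(mol_weight=487.5, clogp=5.8, h_donors=2, h_acceptors=7)
print(f"{n} violation(s)")  # two or more violations suggest poor oral absorption
```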
Target Validation
Pharmacokinetics is sometimes characterized as “what your body does to a drug.” Conversely, pharmacodynamics (PD) can be thought of as “what a drug does to your body.” On a fundamental level, the drug needs to be active against the target of interest.
Unfortunately, it is possible to inhibit or activate a biological target and yet have no effect on the disease of interest—this is particularly true for newer targets. Validated targets are targets for which modulation of their activity alters a disease state, and the best way to validate a target is through the use of a small molecule (or peptide or protein). A tool compound can be used for target validation; this is a molecule that has sufficient activity and ADME properties to answer basic biological questions about the target, but may not be suitable as a drug, perhaps because it is toxic or has other deleterious properties.
Box 2.14: What Surprised an Academician?
We started KAI with three drug candidates for three different clinical indications. When asked by the VC to rename them (to differentiate them from those used in my academic laboratory), I thought it was silly that they did not accept the names KAI 001, KAI 002 and KAI 003. In my naiveté, I was sure that we would not need to make more than 999 compounds after all the preliminary work in my university lab. I also did not realize that a company should not reveal to others how many compounds were made (e.g., if few were made, the IP might not be that strong). So instead of giving sequential numbers, our VC dubbed KAI-9803 based on my answers to “what year did you design that peptide?” and “where did it fall in the sequence of peptides you designed that year?”. –DM-R
Reduced Toxicity and Drug–Drug Interactions
There is a growing consensus that virtually all drugs have off-target effects, and it is important to understand these and determine whether they will cause adverse events. Toxicology is concerned with specific toxic effects, for example liver damage. A number of molecular substructures are known to have caused toxicity in the past, and medicinal chemists try to avoid having these moieties in their lead molecules. Ultimately though, it is impossible to predict whether a given molecule will be nontoxic without doing in vivo experiments.
Moreover, toxicity is not the only problem; there are many other “anti-targets” that a drug lead should avoid hitting. One of the most important is a cardiac ion channel protein called hERG, which when inhibited can cause severe and sometimes fatal heart problems. This has led to the withdrawal of several marketed drugs, and medicinal chemists today almost universally assess the hERG activity of their leads. The SAR of hERG binding is partially understood, and often medicinal chemists can engineer promising leads to maintain potency against the target protein and also avoid hitting hERG.
Similarly, many of the enzymes involved in metabolizing drugs (particularly a large class of enzymes called CYPs) can also be inhibited by small molecules, which can lead to drug–drug interactions if the enzymes in question are necessary for metabolizing other drugs. During the course of lead discovery it is important to measure CYP inhibition and, ideally, to make changes to the molecule to reduce or eliminate it.
Pharmacokinetics and pharmacology are both utterly dependent on animal models, but it is important to always remember that mice are not furry little people: drugs metabolized rapidly in mice may be stable in humans and vice versa. Because of such differences, obtaining animal data in at least two different species is usually necessary before moving a drug into the clinic.
Other Issues
A recent trend in medicinal chemistry is fragment-based drug discovery. Instead of starting with lead-sized or drug-sized molecules with low micromolar IC50 values, this approach starts with smaller “fragments” with molecular weights one-quarter to one-half the size of typical drugs and potencies in the mid-to-high micromolar range. Because there are fewer small fragments than larger molecules (just as there are fewer two-letter words than four-letter words), chemical diversity can be screened more efficiently. Moreover, smaller, simpler molecules are less likely to have extraneous bits that do not help the overall potency but cause problems with PK or PD. Of course, identifying and optimizing lower-affinity molecules are challenges in their own right.
Box 2.15: The Bottom Line
Multi-parameter molecule optimization in the absence of complete data is what makes medicinal chemistry as much an art as a science. The fact that an acceptable solution may not exist can make it a particularly frustrating art.
Ultimately, lead optimization requires the medicinal chemist to improve numerous parameters simultaneously: potency, selectivity, solubility, PK, and PD. Unfortunately, improving one may exacerbate another. Medicinal chemistry requires picking the best possibilities to explore, even though it is impossible to gather all data for every compound.
In fact, there is no guarantee that it is even possible to produce a molecule that satisfies all the necessary parameters; targets for which this is the case are called “undruggable.” This multi-parameter optimization in the absence of complete data is what makes medicinal chemistry as much an art as a science, and the fact that a solution may not exist sometimes makes it a particularly frustrating art. The next time you take a drug, it is worth reflecting on the effort, skill, and serendipity that went into discovering that little molecular sculpture.
Box 2.16: Resources
1. Journal of Medicinal Chemistry ( http://pubs.acs.org/journal/jmcmar )
This is probably the premier journal for medicinal chemistry but has onerous requirements for compound characterization.
2. Bioorganic and Medicinal Chemistry Letters ( http://www.sciencedirect.com/science/journal/0960894X ).
Medicinal chemistry papers are often not submitted for publication until years after the work has been completed, by which point some compounds may be missing key data; many researchers, particularly in industry, therefore publish in this journal. It has a lower bar to publication, but some excellent work appears here too.
3. In the Pipeline
( http://www.corante.com/pipeline/ )
This is probably the best chemistry-related blog out there. The author, Derek Lowe, is an experienced medicinal chemist who writes prolifically about a range of topics, and his posts attract dozens of comments.
4. Practical Fragments ( http://practicalfragments.blogspot.com/ )
For all things having to do with fragment-based drug discovery and early stage lead optimization, my blog is a good resource.
Vaccine Development
Harry Greenberg
Few, if any, biomedical interventions have been as successful at preventing morbidity and mortality as vaccines. The eradication of smallpox, the near eradication of paralytic polio, and the potential reduction of the global burden of hepatocellular and cervical cancer are just a few of the many benefits that vaccines have rendered in the last 50 years. Along with their great impact, vaccines are in many ways one of the most egalitarian of all health interventions, since their benefits are generally well suited for delivery to wealthy and poor countries alike. Therefore, vaccines have the ability to rapidly and efficiently alter the face of global health and well-being.
Vaccines are molecular moieties (or antigens) that are administered to people via a number of routes, such as parenterally (e.g., intramuscularly, subcutaneously, intradermally) or via a mucosal surface (e.g., orally or intranasally). In general, they are administered on only one or a few occasions because they are designed to work indirectly by eliciting a long-lasting immune response in the host. They can be formulated from simple proteins or peptides, polysaccharides, nucleic acids, or complex mixtures of these constituents. In addition, vaccines can be created using complex infectious agents that are attenuated in some fashion and whose replication is restricted. These infectious agents can, on occasion, also be used to carry and express exogenous proteins.
To date, the most successful vaccines have been live attenuated infectious agents, inactivated infectious agents, or complex components of infectious agents or polysaccharides conjugated to protein carriers. Vaccines are employed to induce a host immune response that is either protective or therapeutic. Thus far, vaccines have been more effective as preventative interventions (avoiding the disease before it is contracted) than as therapeutic ones (treating the disease once established). The general or even specific applicability of the “therapeutic vaccination” concept remains to be determined in humans.
Vaccination has been most successfully employed to prevent a wide variety of infectious diseases caused by many different viruses and bacteria. Vaccination against parasitic diseases has been much less successful. “Vaccination” has also been used with more limited success for treatment of allergy. In addition, a variety of experimental vaccines for the treatment of substance addiction, for birth control, and for treatment of autoimmune diseases have been studied but have not yet been widely successful. The remainder of this brief summary will therefore focus specifically on preventative vaccines against infectious diseases.
Box 2.17: Key Terms and Abbreviations
Antigen: entity that activates an immune response
Parenteral: routes for drug absorption outside the gastrointestinal tract
HIV: human immunodeficiency virus
CMV: cytomegalovirus
RSV: respiratory syncytial virus
HCV: hepatitis C virus
HA: hemagglutinin antigen
Adjuvant: compound that increases the host immune response to an antigen
Vaccine Efficacy
The past 50 years have witnessed the development of many highly successful new vaccines. However, the remaining important infectious disease targets, such as HIV, tuberculosis, malaria, CMV, and RSV, have proven much more difficult to prevent. Vaccine development has the highest likelihood of success when the natural infection induces a strong and enduring immunity to subsequent infection or illness. This was, for example, the case for smallpox, measles, and hepatitis A and B. Vaccination approaches have also been successful in cases where reinfection can occur (usually at a mucosal surface) but secondary infection is not as often associated with severe sequelae; this is the case with rotavirus and influenza vaccines.
When one or a few natural infections do not lead to the development of significant immunity—as is the case for HIV, HCV, gonorrhea, rhinovirus infection and malaria, for example—then it is likely that the pathway to an effective vaccine will be far more difficult. In these cases, it is likely that identification of novel immunization strategies will be required in order to develop a successful vaccine.
Two key elements in vaccine development are the availability of a predictive functional assay to measure vaccine response and a relevant animal model in which to test various immunization strategies. Animal models that replicate actual wild type infections of the microbial pathogen in the human host are most likely to be relevant. The duration, specificity and strength of the host response, as measured by a validated functional assay, are key determinants of the efficacy of the vaccine.
Box 2.18: What Surprised an Academician?
Unless the targeted disease is quite prevalent, a large number of patients must be included in vaccine trials to demonstrate efficacy—even for a highly effective vaccine. This can greatly add to development costs and duration.
How Vaccines Generally Work
Vaccines are designed to induce the host to mount an immune response that prevents or eliminates infection by the targeted pathogen. The induction of host immunity involves a variety of factors including many aspects of the innate immune system, the site of immune induction, the nature of the antigen, and the quantity and duration of antigen exposure. Each of these aspects needs to be carefully considered to maximize the chances of eliciting an acquired antigen-specific immune response that has functional therapeutic activity.
Whereas both T and B cell responses are often induced by vaccination, as a generality, most existing successful vaccines “work” at the effector level on the basis of the B cell and antibody responses induced. Many methods have been and are being examined to enhance the immune response to vaccines, including using an adjuvant to boost the innate immune response, using protein carriers to induce immune memory to polysaccharide antigens, and using replicating vaccines to produce more antigens with greater diversity at the site of infection. As mentioned above, when natural infection induces protective immunity, it has been relatively straightforward to design a vaccine that mimics the effective component(s) of that infection. When natural infection is not a very effective inducer of protective immunity, vaccine development has been much more difficult.
Some New Technologies in Vaccine Development
This short review cannot cover all the new technologies that are currently being explored to develop novel or improved vaccines. A few examples are provided to invite the reader to examine the field more extensively. Many pathogens avoid host immunity by altering or expanding their antigenic diversity. Examples include such diverse organisms as influenza, HIV and pneumococcus. Recent advances in immunology have demonstrated the existence of “common” or “shared” antigens on several pathogens, such as the finding that the influenza HA stalk is a target of a protective antibody. Such targets could provide an “Achilles’ Heel” to which the host can target its immune response and thereby circumvent the problem of pathogen antigenic diversity. Currently many investigators are working to design new vaccines directed at such shared antigens of influenza, HIV and pneumococcus, for example.
As an alternate approach, directed regulation of the innate immune response holds the promise of greatly enhancing the level and duration of acquired immunity following vaccination. Many investigators are now exploring the safety and efficacy of new adjuvants that directly target specific signaling molecules, thereby enhancing the innate immune response.
Finally, immunization using nucleic acids (either DNA or RNA) encoding antigenic proteins holds promise to greatly simplify vaccine manufacturing, while substantially reducing cost and enhancing safety. To date, such strategies have been highly promising in small animal models but less so in people. Continued innovation in this area, if successful, could greatly facilitate vaccine development.
Special Considerations Concerning Safety and Cost
There are a variety of factors that distinguish vaccine development from virtually all other areas of therapeutics development. Of course, like all other medical interventions, vaccines must be shown to be efficacious. However, unlike most other interventions, vaccines are generally given to healthy individuals with the intent of preventing a possible illness in the future rather than treating a current problem. Because of this fact, the level of tolerance for risk associated with vaccination is very dependent on the level of perceived danger from the infection being prevented. For example, when polio epidemics were common, the public clamored for a preventative intervention. However, since polio has disappeared from the Western hemisphere, even one case of immunization-induced polio per million vaccinations represents an unacceptable risk in the USA and Europe.
This common and pervasive concern with vaccine risk is often intensified because vaccines are most frequently given to young healthy children, who can be considered most vulnerable to untoward risk. In addition, the benefits of vaccination are most easily measured at the societal rather than the individual level because the odds that any given individual will be infected are frequently quite low. This dichotomy further complicates acceptance of vaccines by the public. Because of these factors, vaccine development often requires investment in very large and extensive safety testing before registration, as well as substantial post-licensing follow-up that is both expensive and complex.
Because vaccines are given to healthy individuals, because they are generally given only a few times during the life of an individual, and because of the prolonged regulatory pathway due to safety concerns as discussed above, they have frequently been perceived as providing a poor return on investment by drug developers. This is, of course, a shame, given their immense societal impact over the years.
Box 2.19: The Bottom Line
Although only one or a few doses of a vaccine are administered, vaccines are generally administered to healthy people, most often children, who are at low risk of acquiring the disease. As a result, the safety hurdle is very high, further adding to the time and cost of vaccine development.
Finally, many of the most important remaining challenges in the area of vaccine development (HIV, tuberculosis, malaria) are diseases that generally afflict the poor, disadvantaged, and less developed regions of the world. This fact has likely inhibited the rate of progress for these much-needed interventions. Despite these issues, recent advances in immunology, material sciences, and systems biology provide exciting opportunities for the vaccine innovators of the future. During the coming decade, we are likely to see vaccination for several of these challenging diseases reduced to practice.
When to Begin Animal Studies
We have identified a new chemical entity or a known drug that affects our validated target/pathway and have shown its efficacy in a cell-based assay. What is the next step?
Experts are divided on whether it is advisable to begin animal studies right away or whether it is better to first identify the optimal compound. By generating and testing analogs of the original “hit,” it may be possible to improve potency or specificity for the target. In vitro studies to obtain an optimal formulation for a drug or simply better solubility can also improve the chance for success once animal studies begin. And there are other considerations, such as in vitro assessment of drug toxicity and metabolism, including liver enzyme assays, hERG channel effects, etc. In other words, we can easily spend a year and thousands of dollars in studies aimed at improving our initial hit.
In vitro and cell-based assays are usually cheaper and faster to run than animal studies, but they are not always predictive of the in vivo behavior of the molecule—which is ultimately most important for determining whether our hit will make a good drug. So, how are drug development programs to decide, with their limited funds, between screening lots of analogs in vitro versus testing only a handful of molecules in animals? As an academician who has followed over 70 programs in SPARK, I find the answer to this question simple.
Take a Short Cut
We should start animal studies as soon as we can. It is true that many improvements to our compound can be made, but a short in vivo study can be extremely valuable in helping to optimize the compound and induce greater interest from partners and investors. A great deal can be learned from an imperfect drug. We might even be lucky and find that our compound shows a therapeutic benefit and drug-like properties!
We must also recognize that failure to demonstrate efficacy at this stage is not a reason to discontinue our project. These are exploratory studies and much can still be done to improve the compound’s selectivity, potency, solubility, bioavailability, safety, metabolism, route of administration and final formulation.
Box 2.20: Key Terms and Abbreviations
Alzet ® pumps: miniature osmotic infusion pumps for the continuous dosing of a drug to a laboratory animal
hERG channel: human Ether-à-go-go-Related Gene channel is a potassium ion channel that is important to normal electrical activity of the heart. Inhibition of this channel can lead to sometimes fatal cardiac arrhythmias
ip: intraperitoneal; within the abdominal cavity
sc: subcutaneous; beneath the skin
SAR: structure–activity relationship
What Animal Model to Use?
It is best to read the literature and use an animal model that is accepted in the field for the given indication. It is inadvisable to develop a new model for this first in vivo trial. Better yet, we can find a collaborator that is using this animal model and have them do the study for us. It is rare that such a study will generate new intellectual property—and the collaborator can provide an independent and unbiased assessment of our compound.
How to Deliver the Drug?
Even if we believe that oral administration is the ideal route for our clinical indication, it is ill advised to attempt to do the first efficacy study in animals using oral gavage. Instead consider intraperitoneal (ip) injection. If the drug is not very soluble, we can deliver the drug with ethanol, DMSO or polyethylene glycol; animals will tolerate quite a high dose of these solvents. If there is concern that the drug dose will be too low using ip injection, we can consider using a subcutaneous (sc) Alzet ® osmotic pump. The company’s Web site details a number of sizes and recommended solvents as well as training on how to implant them—all very easy.
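As a worked example of the dosing arithmetic involved, the sketch below back-calculates the concentration at which to load an osmotic pump for a target daily dose. The flow rate, animal weight, and dose are hypothetical placeholders; in practice, use the manufacturer's specifications for the chosen pump model, and note that the required concentration may exceed the drug's solubility in the chosen vehicle.

```python
# Back-of-the-envelope sketch: what concentration to load into an osmotic pump
# to hit a target dose. All numbers here are hypothetical placeholders; use the
# flow rate specified for the actual pump model.

target_dose_mg_per_kg_day = 10.0   # desired systemic dose (hypothetical)
animal_weight_kg = 0.025           # a 25 g mouse
pump_rate_ul_per_h = 0.5           # nominal pump flow rate (hypothetical)

daily_dose_mg = target_dose_mg_per_kg_day * animal_weight_kg
daily_volume_ml = pump_rate_ul_per_h * 24 / 1000.0
required_conc_mg_per_ml = daily_dose_mg / daily_volume_ml

print(f"Load pump at {required_conc_mg_per_ml:.1f} mg/mL "
      f"to deliver {daily_dose_mg:.3f} mg/day")
```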
Box 2.21: What Surprised an Academician?
When selecting a delivery formulation for these initial animal studies, simpler is always better. We once used an over-the-counter beauty lotion for an initial topical delivery study because it had the desired aqueous formulation properties. –DM-R
Start with a Small Safety Study
To make sure that the drug dose is not fatal, we can inject a couple of healthy animals and observe them for a few hours for obvious signs of toxicity. A veterinary nurse can help with monitoring for adverse events. Once we know that the dose selected is not acutely toxic, we can jump into efficacy and longer safety studies in the chosen animal model of disease.
Learn as Much as You Can from the First In Vivo Study
Animals are precious and should be used sparingly. Therefore, we should plan experiments carefully to include proper controls. If there is a drug that is known to be efficacious in the model, we should treat three to four animals with that drug to serve as a positive control. We can include a vehicle control if we are worried about effects of the vehicle. Otherwise, for this first study, just compare drug-treated to non-treated animals. When euthanizing the animals, we should collect as many organs and bio-fluids as possible for analysis. A pathologist can advise us on how to preserve the tissues and store samples for later analysis. We should attempt to collect as much data as possible relevant to our disease and to compound safety. The bottom line: we need to maximize the information obtained from this first set of studies.
If the Short-Cut Failed
We are not done! Remember that we have committed to take the long route even if the shortcut failed. We can go back and perform further SAR studies with analogs of our hit and additional studies on drug solubility and in vitro toxicity. We can now focus on correcting the problems identified based upon the first in vivo experiment.
If the Short-Cut Succeeded
Congratulations! The work has just begun. But now we have more compelling data that the project is worth pursuing. Make sure to consult Sect. 2.1 on robust preclinical work and Sect. 2.7 on in vivo pharmacology to plan your next steps.
Box 2.22: The Bottom Line
An early small in vivo study can be extremely helpful in demonstrating both efficacy and preliminary toxicity of our drug. Results can also inform further rounds of optimization of the compound. During initial animal studies, the drug should generally be administered using a parenteral route (ip or sc via osmotic infusion pump).
In Vivo Pharmacology: Multiple Roles in Drug Discovery
Simeon I. Taylor
Bristol-Myers Squibb, New York City, NY USA
Classical drug discovery relied primarily upon testing compounds for activity in established animal models. When following this paradigm, it was not necessary to ask questions such as why one conducted in vivo pharmacology experiments or whether there was value in studying animal models of disease. Rather, screening in various animal models was often the first step in the drug discovery process. The use of animal models played an essential and central role in the classical drug discovery process. In the past, the molecular target was frequently unknown at the time a drug was approved for use in patients. Indeed, as illustrated by the example of sulfonylurea drugs, the molecular target (e.g., the sulfonylurea receptor) was identified several decades after the drugs were in widespread use to treat type 2 diabetes mellitus.
How times have changed! Modern drug discovery most often relies on a radically different research paradigm. Target-based drug discovery has become so entrenched that some scientists actually question whether in vivo experiments in animal models have any value in the modern approach to drug discovery. This section illustrates the many ways in which in vivo pharmacology studies in experimental animals contribute to drug discovery.
Target Identification and Validation
How are drug targets identified in the first place? While there is no simple answer to this question, proposals for new targets are often based upon genetic experiments. Genetic diseases (either in humans or in experimental animals such as mice) can generate hypotheses suggesting potential drug targets. In some cases, a gene mutation (most often a loss-of-function mutation) causes disease. For example, homozygous loss-of-function mutations in the genes encoding either leptin ( ob/ob ) or the leptin receptor ( db/db ) cause obesity in mice. Based upon the identification in 1994 of a loss-of-function mutation in the leptin gene as a cause of obesity in mice, a biotechnology company paid a large sum of money to license the relevant intellectual property from an academic institution. In other words, a biotechnology company viewed this genetic evidence as compelling validation that leptin represented a therapeutic protein to treat human obesity.
Ultimately, the clinical studies in humans were disappointing. Although leptin is efficacious in rare human diseases associated with low leptin levels (e.g., mutations in the leptin gene or lipoatrophic diabetes), it did not deliver the desired efficacy in patients with the common forms of obesity. In short, the predictive value of leptin-deficient animal models was limited to predicting the response of leptin-deficient humans to pharmacologic therapy with leptin. However, most obese patients turn out to be leptin-resistant rather than leptin-deficient. Accordingly, human responsiveness to antiobesity treatments was better predicted by a leptin resistant model (i.e., the db/db mouse with mutations in the leptin receptor gene).
Box 2.23: Why Do Some Scientists Question the Value of Studies in Animal Models?
There are many examples where data obtained from experiments in animal models fail to predict the outcome of clinical studies. It would be fallacious, however, to infer that animal studies in general are entirely without value. Animal models are idealized versions of disease where all subjects are the same age (usually young), eat the same food, and have the same routines. Human subjects are much more varied, and so will have a more variable response to treatment. This is why it is very important to know the limitations of your chosen animal model when extrapolating to expected effect in humans.
What lessons can be drawn? There are many animal models. It is essential to exercise scientific judgment before extrapolating from an animal model to human disease. For example, multiple animal models of a particular disease may yield discordant predictions. Whereas the ob/ob mouse model suggested that leptin would be a highly efficacious treatment for obesity, the db/db mouse model predicted the exact opposite. It is often necessary to carefully compare results from animal models to clinical specimens or observations to assess the predictive value of a particular animal model for a particular human disease.
There are at least three other limitations which make it difficult to extrapolate from genetic models such as knock-out mice:
- Because mutations are present at the earliest times in development, there can be important developmental effects which might not be relevant to pharmacology in adult animals. For example, if a mutation in a particular gene impacts development of an organ, this would have a profound effect upon physiology. Pharmacological inhibition of the function of the same gene product in an adult animal would not necessarily lead to the same physiological deficit.
- Many loss-of-function mutations cause disease. Accordingly, to treat the disease it may be necessary to find a drug to activate the function of the gene product. However, as illustrated by the example discussed above, leptin was not an efficacious treatment of obesity despite the fact that leptin deficiency causes obesity. In contrast, there are examples where loss-of-function mutations have been shown to promote health. For example, loss-of-function mutations in Pcsk9 lead to decreased LDL levels, thereby decreasing the risk of cardiovascular events. Subsequent data demonstrated that loss-of-function mutations in the Pcsk9 gene reliably predicted the pharmacology of Pcsk9-neutralizing antibodies.
- It seems likely that loss-of-function mutations may accurately predict the pharmacology of inhibitors or antagonists. For a variety of reasons, agonists and activators may not always exert pharmacological effects which are the opposite of the phenotype of loss-of-function mutations.
Box 2.24: Key Terms and Abbreviations
Pharmacodynamic efficacy: the ability of a compound to affect the in vivo activity of a target
Disease efficacy: the ability of a compound to improve the effects of disease
Off-rate: the rate of compound release after binding to the target; irreversible binders have a zero off-rate
Drug exposure: also called the AUC (Area under the curve); the integral under a plot of plasma drug concentration versus time
Pro-drug: a compound which requires metabolism after administration in order to show therapeutic activity
PK: pharmacokinetics; measurements of the absorption, distribution, metabolism and excretion of a molecule after administration
PD: pharmacodynamics; measurements of drug action in the body (e.g., target inactivation, receptor off-rate, etc.)
NOAEL: no observed adverse effect level
Assessing Efficacy During Lead Optimization
As a prelude to discussing the role of animal experiments in the lead optimization process, it is important to distinguish between two concepts:
- Pharmacodynamic efficacy. This refers to the ability of a compound to engage the molecular target in vivo, and also to modulate in vivo biology. Among other things, this requires that the compound be delivered in appropriate concentrations to the biological compartment where the target resides. It also requires that the pharmacokinetics provide sufficient exposure of the drug to the target. There are at least two complementary approaches to assessing pharmacodynamic efficacy. (a) In some cases, it is possible to assess target occupancy (e.g., by assessing the ability of a drug to inhibit binding of PET ligands to the drug target). (b) It is often useful to assess the function of the target (e.g., by assessing the ability of a protein kinase inhibitor to decrease the phosphorylation state of a specific kinase substrate). A minimal occupancy calculation is sketched after this list.
- Disease efficacy. This refers to the ability of a compound to ameliorate the manifestations of a disease. Needless to say, evidence of disease efficacy in an animal model is frequently interpreted as suggesting that the drug will also be efficacious in human disease. This expectation is not always borne out. Nevertheless, animal models should not be abandoned entirely simply because they are imperfect predictors of human pharmacology. Situations in which compounds show strong pharmacodynamic efficacy but lack disease efficacy can also occur; these suggest that the target was validated in an over-simplified model of disease.
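As promised above, here is a minimal sketch of the standard one-site equilibrium relation that underlies target-occupancy reasoning, occupancy = C / (C + Kd). The Kd and concentrations are hypothetical; the point it illustrates is that high occupancy requires sustaining free drug concentrations at multiples of the Kd.

```python
# Minimal sketch: equilibrium target occupancy from free drug concentration,
# using the one-site binding relation occupancy = C / (C + Kd).
# The Kd and concentrations below are hypothetical.

def fractional_occupancy(free_conc_nm, kd_nm):
    """Equilibrium fraction of target bound at a given free drug concentration."""
    return free_conc_nm / (free_conc_nm + kd_nm)

kd_nm = 5.0  # hypothetical binding affinity
for conc in (1, 5, 15, 45, 95):
    occ = fractional_occupancy(conc, kd_nm)
    print(f"{conc:>3} nM free drug -> {occ:.0%} occupancy")
```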
Whether or not they turn out to predict disease efficacy in humans, animal models provide essential information for the pharmaceutical R&D process. For example, animal models can provide important insights for lead optimization:
- Which parameters of in vitro pharmacology best predict disease efficacy? In many cases, the in vitro potency (e.g., the thermodynamic affinity with which the compound binds to its target) will be the best predictor. However, in some cases, the kinetic off-rate may be more relevant. For example, when neurotransmitters are released at synapses, this leads to very high local concentrations that persist for short durations. If a competitive antagonist has a rapid off-rate, this will allow the high concentrations of neurotransmitter to compete effectively with the drug. In contrast, if the drug has a slow off-rate, the drug will remain bound to the target during the brief time the neurotransmitter achieves its peak level. In this nonequilibrium condition, a drug with a slow off-rate will out-perform a drug with a fast off-rate even if both drugs have the identical in vitro potencies during equilibrium binding conditions.
- Which parameter(s) of drug exposure best predict disease efficacy in vivo ? In some cases, peak drug levels drive disease efficacy—e.g., for transcriptional activators that promote expression of long-lived proteins. In other cases, drug exposure (the integral of drug concentration over time) drives disease efficacy—e.g., if it is necessary to sustain inhibition of a target for 24 h a day.
- Does the drug reach the appropriate compartment to drive disease efficacy? Sometimes, a drug can accumulate in an organ because it is tightly bound to an irrelevant protein. To derive the desired pharmacology, it is necessary to achieve sufficient levels of free drug to drive the required occupancy of the correct molecular target.
- Do metabolites show pharmacological activity? In some cases, compounds undergo metabolic transformation into active species. Sometimes the administered compound (i.e., the “prodrug”) is inactive and undergoes metabolic transformation into an active species. For example, prednisone is inactive and must be converted into the active compound, prednisolone, by 11β-hydroxysteroid dehydrogenase. In other examples, active metabolites are to blame for a compound’s undesired side effects, in which case medicinal chemistry efforts focus on modifying the lead compound to reduce that mode of metabolism. In vivo pharmacology experiments are essential to identify and quantitate the levels of drug metabolites and to assess their contribution to the overall pharmacology.
- What is the projected human dose? As part of the feasibility assessment, it is necessary to estimate the expected dose required for efficacy in humans. There are at least two factors which enter into the dose projection: first, quantitation of the exposure required for efficacy in at least one animal model; and, second, prediction of the expected pharmacokinetic (PK) profile in humans. The projection of human PK is generally based upon measurement of PK in multiple species (e.g., mouse, rat, dog, and nonhuman primate).
- How safe are the compounds and what is the therapeutic index? Safety assessment is generally conducted in two nonclinical species (one rodent and one non-rodent) prior to initiating human studies. The “no observed adverse effect level” (NOAEL) is defined as the highest exposure that can be achieved without causing adverse effects in the test species. The therapeutic index is defined as the ratio of the NOAEL exposure to the efficacious exposure. To calculate the therapeutic index, it is essential to define the exposure required for efficacy in at least one animal model. This is one of the most important reasons why it is essential to have conducted efficacy studies prior to advancing a compound into development. (The dose-projection and therapeutic-index arithmetic from the last two bullets is sketched after this list.)
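A minimal sketch of that arithmetic follows, assuming the standard FDA body-surface-area (Km) conversion factors for human-equivalent dose (human 37, rat 6, and so on). All doses and exposures below are hypothetical, and a real projection would also incorporate the predicted human PK profile, as the dose-projection bullet notes.

```python
# Minimal sketch of two calculations from the list above. The Km factors are the
# standard FDA body-surface-area values; all doses and exposures are hypothetical.

KM = {"mouse": 3, "rat": 6, "dog": 20, "monkey": 12, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg, species):
    """Scale an animal dose (mg/kg) to a human-equivalent dose by body surface area."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

hed = human_equivalent_dose(25.0, "rat")          # hypothetical efficacious rat dose
print(f"Human-equivalent dose: {hed:.1f} mg/kg")  # ~4.1 mg/kg

# Therapeutic index as defined in the text: NOAEL exposure / efficacious exposure.
noael_auc = 120.0       # hypothetical AUC (ug*h/mL) at the NOAEL
efficacious_auc = 8.0   # hypothetical AUC required for efficacy
print(f"Therapeutic index: {noael_auc / efficacious_auc:.0f}")  # 15
```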
Identifying Clinical Biomarkers
Relatively long periods of treatment are often required to assess efficacy in human disease. Prior to embarking upon such studies, it is essential to define the relevant dose range to study. Toward that end, it is very useful to assess the effect of the drug on translational clinical biomarkers. For example, in the development of sodium-dependent glucose transporter-2 (SGLT2) inhibitors as antidiabetic drugs, it was possible to assess the drug’s pharmacodynamic (PD) efficacy by measuring excretion of glucose in the urine. There are at least two questions which must be addressed in order to interpret clinical biomarker data:
- Does the biomarker predict disease efficacy? In the case of SGLT2 inhibitors, loss of glucose in the urine is a direct consequence of inhibiting the transporter that mediates reabsorption of glucose from the glomerular filtrate. In addition, loss of glucose in the urine is the key mechanism that drives the decrease in plasma glucose levels. This line of reasoning provides a compelling rationale to believe that glucosuria is a valid biomarker to predict glycemic efficacy in patients with type 2 diabetes.
- What degree of change in the biomarker is required to drive disease efficacy? By studying the biomarker in animal models of disease, it is possible to obtain experimental data to calibrate the biomarker relative to assessments of disease efficacy. There is no guarantee that the calibration derived from animal models can be extrapolated quantitatively to human disease, but it does provide a reasonable starting point. In the absence of such data from animal models, clinical investigators have no alternative but to guess at how to calibrate the biomarker.
In vivo pharmacology studies in animal models make critical contributions to many aspects of pharmaceutical R&D—including target identification, target validation, lead optimization, safety assessment, and translational biomarker identification, validation, and calibration. Unfortunately, for a variety of reasons, nonclinical studies are only imperfect predictors of clinical pharmacology. Nevertheless, perfection is seldom achieved in human endeavors. While researchers must take this limitation into account, it would be a mistake to let the perfect be the enemy of the good.
Pharmacokinetics and ADME Properties
Werner Rubas
NEKTAR, San Francisco, CA USA
Emily Egeler
Stanford University School of Medicine, Stanford, CA USA
Initial screening efforts and secondary assays to identify compounds with desired efficacy and specificity for the intended target focus on issues of pharmacodynamics (PD), which in layman’s terms can be defined as “actions of a molecule (drug) on the body.” For a drug to be successful, however, the active molecule must be able to reach the intended target at high enough concentrations and for a long enough time to exert its therapeutic effect. The body must also be able to remove the active molecule without significant buildup of toxic species, or the drug will fail in clinical trials. These considerations are evaluated in pharmacokinetic (PK) studies; summed up as “actions of the body on a molecule.”
Pharmacokinetic studies measure the absorption, distribution, metabolism, and excretion of an administered molecule—often abbreviated as ADME characteristics.
Box 2.25: Key Terms and Abbreviations
ADME: Absorption, Distribution, Metabolism, and Excretion
CYP: Cytochrome P450, a class of enzymes important in drug metabolism
Polymorphism: genetic variation in enzymes that affects their activity and leads to differences in drug metabolism rates
iv: intravenous
SDPK: Single dose pharmacokinetic
SAD: single ascending dose
Key ADME Parameters
ADME characteristics depend on both intrinsic properties of the molecule, such as pKa, size, and lipophilicity, and extrinsic properties such as formulation or route of administration. Excellent resources exist for a detailed description of the influence of each pharmacokinetic factor discussed briefly below [14].
Important ADME characteristics include those listed below and pictured in Fig. 2.2 :
- Bioavailability (F)—The percentage of an administered dose that reaches the systemic circulation. Molecules administered intravenously have 100% bioavailability, whereas molecules delivered topically, or orally with a high first-pass effect, have a lower bioavailability.
- Volume of distribution (Vd)—The apparent volume required to dissolve the administered dose at the drug concentration measured in the plasma. For a drug retained exclusively in the vascular compartment, the volume of distribution equals the plasma volume (0.04 L/kg body weight). For a drug that is extensively bound in peripheral tissues, Vd can greatly exceed the total body volume.
- Clearance (CL)—The volume of blood or plasma completely cleared of drug per unit time. Total CL is related to Vd and the elimination half-life (t1/2) by:

t1/2 = (ln 2 × Vd) / CL

Clearance at specific organs (liver, kidneys, skin, lungs, etc.) depends on the blood flow through the organ, so disease states can alter drug clearance. Intrinsic clearance (CLint) refers to clearance measured in vitro.
- Area under the curve (AUC)—The integral under a plot of plasma drug concentration versus time. The AUC reflects the “total exposure” from a single dose of drug. The dose-normalized ratio AUC(oral)/AUC(iv) yields the bioavailability (see the worked sketch after this list).
- First-pass effect—The extent of metabolism that occurs before an orally administered drug enters the systemic circulation.
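To make these definitions concrete, below is a minimal noncompartmental sketch that estimates AUC, t1/2, CL, Vd, and F from a hypothetical single-dose data set. Real analyses extrapolate the AUC to infinity and fit the terminal phase over more points; everything here (times, concentrations, doses) is invented for illustration.

```python
# Minimal noncompartmental sketch of the parameters defined above, using a
# hypothetical iv concentration-time profile.
import math

times_h = [0.25, 0.5, 1, 2, 4, 8, 12]           # sampling times (h)
conc_iv = [9.2, 8.5, 7.3, 5.4, 3.0, 0.9, 0.28]  # plasma conc (mg/L), hypothetical
dose_iv_mg = 100.0

# AUC by the linear trapezoidal rule (mg*h/L)
auc_iv = sum((t2 - t1) * (c1 + c2) / 2
             for (t1, c1), (t2, c2) in zip(zip(times_h, conc_iv),
                                           zip(times_h[1:], conc_iv[1:])))

# Terminal elimination rate constant k from the last two points (log-linear decline)
k = (math.log(conc_iv[-2]) - math.log(conc_iv[-1])) / (times_h[-1] - times_h[-2])
t_half = math.log(2) / k            # t1/2 = ln 2 / k
cl = dose_iv_mg / auc_iv            # clearance, L/h (iv dose / AUC)
vd = cl / k                         # volume of distribution, L (so t1/2 = ln2*Vd/CL)

# Bioavailability from a matching oral study (hypothetical AUC and dose)
auc_po, dose_po_mg = 14.0, 100.0
F = (auc_po / dose_po_mg) / (auc_iv / dose_iv_mg)

print(f"AUC={auc_iv:.1f} mg*h/L, t1/2={t_half:.1f} h, CL={cl:.2f} L/h, "
      f"Vd={vd:.1f} L, F={F:.0%}")
```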
Fig. 2.2: Plasma concentration curve with PK metrics.
Drug Metabolism and Drug–Drug Interactions
The simplest form of elimination is direct excretion of an unchanged drug molecule into the urine, bile, or occasionally tears, sweat or air. More commonly, molecules undergo biotransformation, a process of metabolism that involves building or breaking chemical bonds within the molecule to improve the body’s ability to excrete it. Biotransformation is grouped into Phase I and Phase II reactions; Phase I enzymes catalyze oxidations, reductions and/or hydrolysis to introduce or unmask functional groups in the molecule. Phase II enzymes conjugate endogenous small polar molecules to the unmasked functional groups to inactivate the drug and improve its water solubility for elimination. A drug may be subject to Phase I metabolism, Phase II, or both. Sometimes, knowledge of a drug’s metabolism is exploited by chemists to devise a prodrug, a molecule whose metabolism creates the true therapeutically active compound, to improve ADME properties.
The cytochrome P450 (CYP) family of enzymes is composed of a number of related isozymes and is responsible for a major portion of drug Phase I metabolism. CYP enzymes are primarily located in the liver, but also occur in a number of other tissues. CYP isozymes differ in their abundance and importance to metabolism across different tissues. For instance, the CYP3A4 isoform is very abundant in the liver and intestinal epithelium and contributes to the biotransformation of almost one half of drugs, whereas CYP2D6 is one of the least abundant isozymes and yet is involved in the metabolism of a quarter of all drugs [ 15 ].
Identifying which CYP isozymes are responsible for metabolism of the lead compound, called reaction phenotyping, is important for two reasons. First, a number of genetic polymorphisms have been identified for CYP isozymes. Polymorphisms result from inherited differences in enzyme expression or mutations that alter enzyme activity. These differences create variation in the rates of drug metabolism within a patient population. Dosing regimens may need to be adjusted to properly treat slow or ultra-fast metabolizers.
The second reason for reaction phenotyping is that many drugs display off-target activity on CYP isozymes, acting as inhibitors, inducers, or both. A co-administered molecule may therefore be metabolized differently than it would be if given alone. These drug–drug interactions must be carefully screened for, as they can impact the metabolites produced either negatively (creating side effects) or positively (improving ADME properties).
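One common first-pass way to triage a reversible inhibitor, offered here as a hedged sketch rather than the field's only method, is the basic static model: it predicts the fold change in a victim drug's AUC from the inhibitor concentration [I], the inhibition constant Ki, and the fraction fm of the victim's clearance carried by the inhibited isozyme. The numbers below are hypothetical, and regulatory guidance layers more elaborate mechanistic-static and dynamic models on top of this.

```python
# Minimal sketch of the basic static model for a reversible CYP inhibitor:
#   AUC_ratio = 1 / ( fm / (1 + [I]/Ki) + (1 - fm) )
# where fm is the fraction of the victim drug's clearance via the inhibited
# isozyme. All numbers below are hypothetical.

def auc_ratio(inhibitor_conc_um, ki_um, fm):
    """Predicted fold increase in victim-drug AUC from competitive inhibition."""
    return 1.0 / (fm / (1.0 + inhibitor_conc_um / ki_um) + (1.0 - fm))

# A victim drug 80% cleared by the inhibited CYP, inhibitor at 2 uM, Ki = 0.5 uM:
print(f"Predicted AUC ratio: {auc_ratio(2.0, 0.5, 0.8):.1f}x")  # ~2.8x
```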
In Vitro Experiments
Initial studies of ADME characteristics are likely to be in vitro due to the high cost of animal studies. Although algorithms exist to extrapolate in vitro data to living systems, preliminary in vivo studies should be performed to confirm that in vitro data are indeed predictive. If the results are in concurrence, a strategy of in vitro screening with limited in vivo testing can be adopted. This approach allows more rapid and cost-effective identification of compound liabilities and better selection of a formulation before moving into animal models.
A number of different test systems are available to measure the in vitro or intrinsic clearance (CL int ) and are listed in Table 2.1 . CYP reaction phenotyping is typically done with panels of purified enzymes and their cofactors. Systems derived from human material are preferred for identifying drug metabolites, but other animal models are important for initial studies of drug safety. In vitro experiments are useful for reaction phenotyping, screening for drug–drug interactions, measuring intrinsic clearance, and identifying metabolites.
Table 2.1: In vitro test systems for intrinsic clearance

| Test system | Specific models |
|---|---|
| Cell extracts | S9 fraction (Phase I and II); microsomes (Phase I only) |
| Cell culture | Hepatocytes (fresh or cryopreserved); HepG2 cells transfected with CYP isozymes |
| Whole tissue | Liver slices |
In addition, there are a number of in vitro models (Caco-2, MDCK, mucosal tissues and skin) to predict absorption via different routes of administration.
In Vivo Experiments
The goal of in vivo PK experiments is to calculate bioavailability, AUC, volume of distribution, and half-life while validating the clearance and metabolite-identity data collected from in vitro studies. The FDA requires safety studies in at least two mammalian species, including one non-rodent species. These animal studies, in concert with pharmacokinetic and pharmacodynamic studies, help predict the dosing range and regimen for the desired therapeutic effect and the expected safe dose in humans before starting phase 1 clinical trials. Because upper dosing levels are usually set at the appearance of adverse side effects (or, in the case of oncology drugs, severe adverse effects), in vivo pharmacokinetic studies go hand-in-hand with toxicology studies. For this reason, some people refer to in vivo testing as ADMET studies.
Initial in vivo PK studies should be done in rodents, preferably rats, in parallel with the in vitro testing. Dosing routes should include intravenous (IV) and the intended clinical route of administration, often oral (po). To gather as much PK data as possible, both urine and blood samples should be collected; with other fluids such as cerebrospinal fluid, perspiration, or breath collected as applicable. The second species for in vivo testing, often dogs or monkeys, should be chosen based on program-specific issues such as metabolite profile and pharmacology.
The first test is often a single-dose pharmacokinetic (SDPK) study to follow the ADME properties of a single bolus of administered drug. Samples are collected at many time points to create a plasma concentration curve similar to that shown in Fig. 2.2 . Once the compound’s ADME characteristics look promising, animal PK studies move into single ascending dose (SAD) experiments to establish the maximum acutely tolerated dose. Further studies with radiolabeled drug are used to confirm the identity of major metabolites and look at drug deposition in different tissues.
The Bottom Line
Pharmacokinetic studies tell researchers how the lead compound is absorbed, distributed, metabolized and excreted from the body. In vitro PK testing is used to identify initial metabolism rates and routes, in addition to identifying potential drug–drug interactions. In vivo PK testing is essential for establishing a pharmacokinetic/pharmacodynamic relationship and the maximum tolerated dose and therapeutic window in animals for a lead compound, which becomes the basis for planning safe and effective doses moving into human trials. Because different crystal forms, salts and formulations of the same compound can have different ADME characteristics, it is very important to show favorable PK properties before scaling up GMP production for clinical trials to avoid costly reformulation delays. Proper PK studies can help drug developers maximize their therapeutic window between minimum efficacious dose and maximum tolerated dose.
Route of Administration and Drug Formulation
Terrence F. Blaschke
The route of administration and the formulation of a drug are often intertwined by virtue of the chemistry and the desired onset and duration of action of the drug. The route of administration of a drug can be broadly separated into three categories: (1) enteral, (2) parenteral, and (3) topical. Each of those categories contains a number of subcategories, including the following:
- Buccal or sublingual
- Slow infusion, then stop
- Continuous infusion (long-term, e.g., insulin)
- Transdermal (intended for systemic effects)
- Epidermal/dermal (intended for local effects at site of administration)
- Vaginal (usually intended for local effects)
- Pulmonary inhalation (intended for local or systemic effects)
Each of these routes of administration requires a different type of formulation. Many companies are developing drug delivery technologies involving oral, nasal, inhalation, transdermal, and parenteral delivery platforms.
Box 2.26: Key Terms and Abbreviations
Enteral: routes for drug absorption through the gastrointestinal tract
Buccal: in the mouth
Sublingual: under the tongue
Intranasal: in the nose
Bolus: a single large dose of drug
Depot: store of drug deposited in the body that is slowly released over time
Bioavailability: the fraction (or percent) of the dose of chemically unchanged drug found in the blood based on the route of administration
SR: slow release
XR: extended release
API: Active Pharmaceutical Ingredient
Bio-betters: new formulations of biologic therapeutics to improve dosing schedule or route of administration
Therapeutic index: the ratio of the toxic dose to the effective dose; a larger therapeutic index suggests a larger safety window
The most common, most desirable, and usually least expensive route of administration is the oral route, especially if the drug is intended for multiple doses or chronic administration. However, for many drugs the oral route may not be feasible or practical: the drug may show poor oral bioavailability and fail to reach the systemic circulation after oral dosing. For the oral route, there are many forms (tablet, capsule, liquid, suspension, etc.), chosen and manufactured on the basis of the bioavailability of the drug.
Another important characteristic of an oral formulation is its rate of absorption. In some settings, rapid absorption is desirable, to achieve a rapid onset of action (e.g., drugs given for pain or for sleep). Tablets may be formulated as “quick dissolve” versions. In other settings rapid absorption is problematic, as the high peak concentrations associated with rapid absorption may result in unwanted side effects, sometimes serious or life-threatening. There are many examples of this in the cardiovascular field.
There are a number of special formulations used for oral administration intended to prolong the duration of action and/or avoid high peak concentrations. These are often called “slow release” (SR) or “extended release” (XR) formulations, to distinguish them from immediate release formulations. Such formulations may allow a drug to be administered at longer dosing intervals that improve patient adherence to the medication (e.g., once instead of twice daily, or twice instead of three times daily). Other special oral formulations include enteric-coated formulations that protect the drug from the acidic environment of the stomach and dissolve in the intestines, or fixed-dose combinations containing two or more active pharmaceutical ingredients (APIs) that are used for conditions benefiting from combined drug therapy (e.g., hypertension, diabetes and HIV).
Box 2.27: What Frustrated an Academician?
Not all drugs reach their target when delivered in a simple formulation. Proper formulation and route of delivery are also critical when using new pharmacological agents for basic research, whether in culture or in vivo. It is important to include studies on drug stability and distribution for each formulation of a new pharmacological agent.
Parenteral Route (Injectables)
For drugs that cannot reach the systemic circulation after enteral or transdermal administration, or for drugs for which a very rapid onset of action is needed, parenteral dosage forms are required. Parenteral routes also avoid the first-pass metabolism in the liver experienced by orally administered drugs. For direct intravenous administration, the drug must be solubilized in a liquid suitable for direct injection into a vein, or—much less commonly—into an artery. Speed of injection (bolus, slow infusion or constant infusion) is dependent on the indication. For anesthetics and sedative/hypnotics used in procedures, and for some cardiac arrhythmias, slow bolus injections are often used. However, for many other agents that are not orally available (e.g., many anticancer agents and the rapidly increasing number of biologics on or close to the market) a slow infusion is preferable to avoid toxicity associated with high peak concentrations and rapid distribution into tissues where unwanted effects can occur (e.g., the central nervous system, heart or other vital organs). With the advent of reliable, miniaturized infusion pumps, there is increasing interest in research evaluating whether the therapeutic index could be improved by longer-term infusions. The subcutaneous infusion of insulin is an example of this approach to therapy of diabetes. Examples in other chronic diseases will no doubt follow.
Epidermal or Transdermal Route
Epidermal or transdermal formulations are generally patches or gels. If systemic absorption is the goal of transdermal delivery, there are many characteristics of the drug that may limit this route. In particular, drugs must be of high potency, be able to penetrate the epidermis, and benefit from a fairly constant concentration in the blood. Alternatively, transdermal or epidermal routes may be selected to deliver a high local concentration of drug and avoid systemic exposure. There is increasing interest in this route of administration. Patches are easy to use (improving patient adherence), provide continuous dosing of a steady drug concentration, and avoid first-pass metabolism. A number of companies are developing new technologies to improve transdermal absorption. A few examples of very successful transdermal systemic delivery systems include the opiate pain reliever fentanyl, contraceptive patches, and clonidine for hypertension. Examples of successful drugs used for local effects include topical steroids, antibiotics and local anesthetics.
Biologics Require New Delivery and Formulation Methods
The rapid increase in the number of biologics already on the market or in the pipeline has driven a dramatic increase in the development of new technologies to improve their delivery and their efficacy/toxicity profiles. A 2010 survey conducted by Global Industry Analysts forecast that protein drug sales would exceed $158B by 2015, with therapeutic antibodies expected to emerge as the market leaders. Of the new first-in-class agents with novel molecular mechanisms of action approved between 1999 and 2008, 50/75 (67%) were small molecules and 25/75 (33%) were biologics. Many, such as rituximab (Rituxan®), bevacizumab (Avastin®), epoetin alfa (Epogen®), and etanercept (Enbrel®), represent multibillion-dollar markets, and several are coming off patent in the next few years. This has created an emerging market for parenteral formulations of so-called “bio-betters” that require less frequent administration and have an improved therapeutic index. A recent survey found more than 20 independent drug-delivery companies doing research on controlled-release depot injection formulations, in addition to internal programs at most of the major pharmaceutical companies. The formulation technologies being explored to deliver biologics include microspheres, liposomes, microparticles, gels, and liquid depots (see examples listed in Box 2.30). Currently there are 13 depot products on the market, and the market size for such products is estimated at more than $2 billion.
Box 2.28: What Surprised an Academician?
A formulation consultant suggested that we formulate our intracoronary drug at pH 3, because the drug was more stable in acidic conditions. Supporting his argument, he cited a few drugs on the market. Luckily, our clinical director knew that the drugs mentioned produced phlebitis and helped me, the basic researcher, push back on that formulation recommendation. Consultants are not always right, and if something does not seem right, we should do our own due diligence. –DM-R
Box 2.29: The Bottom Line
Identifying the optimal formulation is a process of trial and error. Understanding the clinical setting and how the drug will be dosed in patients is critical for proper formulation development. Compromise may be required to fit the pharmacodynamics and chemical properties of the API.
Box 2.30: Suggested Resources
• Liechty WB, Kryscio DR, Slaughter BV and Peppas NA (2010) Polymers for Drug Delivery Systems. Annual Review of Chemical and Biomolecular Engineering 1:149–173. (Broad and comprehensive review of this topic. Contains 149 references and other related resources.)
• Wang AZ, Langer R, Farokhzad OS (2012) Nanoparticle Delivery of Cancer Drugs. Annual Review of Medicine 63: 185.
• Timko BP, Whitehead K, Gao W, Kohane DS, Farokhzad OC, Anderson D, Langer R (2011) Advances in Drug Delivery. Annual Review of Materials Research 41: 1. (This review discusses critical aspects in the area of drug delivery. Specifically, it focuses on delivery of siRNA, remote-controlled delivery, noninvasive delivery, and nanotechnology in drug delivery.)
• Rowland M, Tozer TN (2011) Clinical Pharmacokinetics and Pharmacodynamics: Concepts and Applications, 4th Edition. Wolters Kluwer/Lippincott Williams and Wilkins, ISBN 978-0-7817-5009-7. (These authors and this book are recognized worldwide as authorities in teaching the basic principles of pharmacokinetics and pharmacodynamics. Each chapter contains study problems (with answers!), and purchasing the text gives online access from anywhere with an internet connection. Pharmacokinetic and pharmacodynamic simulations are also available on the Web site.)
• NIH Clinical Center, “Principles of Clinical Pharmacology”: http://www.cc.nih.gov/training/training/principles.html (This course is taught by faculty members from the National Institutes of Health (NIH) and guest faculty from the Food and Drug Administration (FDA), the pharmaceutical industry, and several academic institutions from across the USA. Course materials are available online via the above URL.)
• American College of Clinical Pharmacology, Educational Offerings: http://www.accp1.org/videos.shtml (This Web site has a free course on pharmacogenomics covering 13 modules, each with overview and depth sections. There is also a Web-based course on pharmacometrics.)
Preclinical Safety Studies
Michael Taylor
Non-Clinical Safety Assessment, San Francisco, CA, USA
Chemical and Systems Biology, Stanford University School of Medicine, 269 Campus Drive, Center for Clinical Science Research Rm 3145c, Stanford, CA 94305-5174, USA
“Primum non nocere” translates from Latin to “First, do no harm.” This fundamental ethical principle in the practice of medicine is equally applicable when exposing individuals to investigational drugs. Virtually all substances can be toxic to human beings if the dose is high enough. Even drinking excessive quantities of water or breathing 100% oxygen for prolonged periods can result in severe organ damage or death. Therefore, when administering a novel compound to human subjects, we have both an ethical and a legal duty to ensure that the risk has been minimized as much as possible.
Safety is difficult to prove without extensive human exposure. Lack of safety, on the other hand, can be proven. We perform preclinical safety studies to better characterize the likely effects and the risk/benefit ratio of administering a novel compound to humans. While experiments using cell lines and animal models will not mirror with certainty what will happen in human subjects, the results can be extremely helpful in predicting dose-limiting side effects and appropriate dose ranges.
The US Food and Drug Administration (FDA) and the International Conference on Harmonisation (ICH) have developed guidance documents that outline a series of in vitro and in vivo experiments that should be conducted prior to each phase of clinical development for a new molecular entity (NME). These studies help predict the drug’s on-target and off-target toxicities, the reversibility of those toxicities, limits on the dose and duration of treatment, early predictors or signals of impending serious toxicity, and the safety margin between the doses at which efficacy and dose-limiting toxicity occur. Additional studies are performed to further characterize the drug’s pharmacologic effects on major organ systems, pharmacokinetics, metabolism, and likely interactions with food or other drugs. Preclinical safety studies that will be submitted to regulatory agencies to support subsequent clinical testing must be performed according to Good Laboratory Practice (GLP). GLP studies require extensive documentation of each study procedure and are quite costly.
Box 2.31: Key Terms and Abbreviations
FDA: US Food and Drug Administration
ICH: International Conference on Harmonisation; a joint effort of European, Japanese, and US regulatory authorities and pharmaceutical industries to provide uniform standards and guidance for drug development
NME: New Molecular Entity; a new drug submitted to the FDA Center for Drug Evaluation and Research (CDER)
Cmax: peak plasma level of a drug that is achieved after dosing
AUC: Area under the curve; plasma concentration of a drug integrated over time after dosing
Excipients: inactive materials (e.g., fillers, binders, coatings) included in the drug product formulation
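Both Cmax and AUC, as defined above, are read directly off a concentration-time curve. As a minimal illustration (the sampling times and concentrations below are hypothetical, not from any real study), they can be computed from sampled plasma data with the linear trapezoidal rule:

```python
import numpy as np

# Hypothetical sampling times (h) and measured plasma concentrations (ng/mL)
times = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
conc  = np.array([0.0, 42.0, 61.0, 55.0, 38.0, 17.0, 8.0, 1.5])

cmax = conc.max()                 # peak plasma level
tmax = times[conc.argmax()]       # time at which the peak occurs

# Linear trapezoidal rule: sum the areas of the trapezoids between samples
auc = np.sum(np.diff(times) * (conc[:-1] + conc[1:]) / 2.0)

print(f"Cmax = {cmax:.1f} ng/mL at t = {tmax:.1f} h")
print(f"AUC(0-24 h) = {auc:.1f} ng*h/mL")
```

In practice, pharmacokinetic software typically also extrapolates the curve beyond the last sampling point, but the trapezoidal sum is the core of the calculation.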
Although there is always opportunity for discussion and negotiation, the FDA (and other regulatory agencies) typically requires a specific battery of nonclinical safety studies to be completed before advancing to phase 1 human studies. In general, the duration of drug exposure in animal studies should equal or exceed that of subsequent clinical studies. Therefore, additional general animal toxicology studies of longer durations are often performed to support increasing duration of clinical dosing prior to phase 2 and phase 3 studies. Specific studies of relatively long duration assessing reproductive toxicity and carcinogenicity are generally required before exposing large numbers of patients to study drugs in phase 3 studies.
The guidance documents include discussions of various types of studies to assess specific toxicities including safety pharmacology of the cardiovascular, pulmonary, and neurologic systems; genotoxicity; reproductive toxicity; and carcinogenicity. In addition, they outline preclinical safety requirements for specific disease indications (e.g., oncology).
In addition to identifying possible toxicities, nonclinical safety studies are also important for identifying potential biomarkers for monitoring untoward effects, establishing the first dose to be administered to humans, and establishing the upper limits of dosing (exposure) in humans. This latter purpose is particularly important when severe or non-monitorable toxicities are encountered.
Guidance regarding the development of approved drugs for new indications, by comparison, is limited. There is, however, guidance on the kinds of animal studies required for reformulated old drugs (an approach also termed repurposing or repositioning), and the FDA expects an old drug being developed for a new indication to meet current regulatory standards.
Before conducting animal studies, it is important to define how the drug will be given to patients: formulation, route of administration, and frequency of dosing. Generally speaking, animal testing should make use of the same formulation and route of dosing to be used clinically. Both the excipients (inactive ingredients of the final formulation) and active pharmaceutical ingredient (API) need to be considered and evaluated. It is important to appreciate that excipients are scrutinized during the approval process similarly to the drug under development.
When determining which excipients to include in the final formulation, the FDA inactive-ingredients listing can be useful. A novel excipient, or use of an existing excipient outside the limits of its current use (e.g., route, dose), will normally require additional evaluation. The use of some excipients is limited by toxicity (e.g., dimethylacetamide, cyclodextrin), so the excipient dose and the patient population for which the product is intended must be considered carefully. A good strategy for excipient evaluation is to use the clinical formulation without API as the vehicle formulation (control group) in animal studies. It is also advisable to include an additional negative control group to confirm that the excipients themselves lack effects.
The selection of the API lot for animal testing is also important. The tested material should be representative of the material intended for clinical use; in particular, its impurity profile should be both qualitatively and quantitatively similar to that of the clinical material. Several guidances discuss the acceptable limits of API impurities and the steps necessary to qualify impurities when those limits are exceeded. A good practice, particularly for IND-enabling studies, is to use the same lot of API for the nonclinical safety studies as will be used in the clinic.
Box 2.32: What Surprised an Academician?
The drug tested in GLP toxicity studies should not be too pure. If the clinical lot has higher levels of impurities than the toxicology lot, which can happen when manufacturing is scaled up, further GLP toxicology studies will be required to characterize the potential toxic effects of the new or increased impurities. This can significantly affect development timelines and budgets. It therefore took me some time to understand, when told by the VP of Drug Development, that my pride in purifying our non-GLP material to 99.5% purity before using it in pig efficacy studies was misguided and potentially a very costly mistake. –DM-R
Appropriate dose selection is important to the conduct of useful, and therefore successful, animal studies. In part, success should be judged by the efficient use of animals: although testing in two animal species is central to drug development and evaluation, there is an ever-increasing awareness of the responsibility to follow humane practices and to thoroughly justify both the need for animal use and the numbers of animals used.
The fundamental premise of dose selection for animal studies is that the animal doses and exposures (Cmax, AUC) should exceed those proposed for humans. Ideally, the high dose for animal studies is selected on the basis of clear evidence of toxicity, such as decreased body-weight gain, changes in clinical condition, or abnormalities in clinical pathology parameters. The low dose should be a small multiple (2–3×) of the projected clinical dose (exposure), and the mid dose should be set between the high and low doses. It is important to separate doses such that the exposures between groups do not overlap. For many orally delivered small molecules or parenterally delivered macromolecules, doses can be adequately spread using half-log or log intervals; because there is less pharmacokinetic variability with intravenous administration, the dose intervals can be smaller.
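As a small worked example of that spacing rule (the 1 mg/kg projected clinical dose below is a hypothetical placeholder), half-log steps multiply each successive dose by roughly 3.16:

```python
import numpy as np

clinical_dose = 1.0                        # mg/kg; hypothetical projected clinical dose
low = 3 * clinical_dose                    # low dose: a small multiple (3x) of the clinical dose
doses = low * 10 ** (0.5 * np.arange(3))   # half-log steps: low, mid, high

print([f"{d:.1f} mg/kg" for d in doses])   # ['3.0 mg/kg', '9.5 mg/kg', '30.0 mg/kg']
```

Full log spacing (a factor of 10 per step) spreads the groups further apart, which can be useful when the tolerated exposures are very uncertain.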
Because both dose and time influence toxicity, it is difficult to predict doses that will be tolerated during chronic administration. It is therefore best to plan studies of increasing duration sequentially. Selecting doses for the first studies can be challenging, and one should draw on all available information. Whereas rodents are usually the species chosen for early efficacy studies, there is typically no dosing information available for the non-rodent model. If limited or no data are available, short-duration non-GLP pilot studies (1–3 days) with minimal numbers of animals should be performed to help select the appropriate dose range. For compounds with limited evidence of toxicity, the high dose can be set by considering the animal exposure relative to humans and practical limits such as dose volume or API solubility.
This discussion provides an introduction to the types and extent of preclinical safety studies required to support drug development. Please also consult the previous sections on formulation and drug metabolism, as these are also important considerations for successful safety evaluation.
Box 2.33: The Bottom Line
When administering a novel compound to human subjects, we have both an ethical and legal duty to ensure that the risk has been minimized as much as possible. Preclinical safety studies help to minimize risk to human subjects by identifying potential toxicities, appropriate dosing ranges, and early signals of toxicity.
Box 2.34: Resources
Specific FDA guidance on Nonclinical Safety Studies:
http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM073246.pdf
Understanding the Difference Between Inquiry and Research
It is common to hear the question, “What’s the difference between inquiry and research?” While it’s true that there are some similarities, inquiry and research are fundamentally different in many ways.
Inquiry-based learning is an approach to learning that emphasizes the exploration of questions and focuses on the process of discovery. On the other hand, research is a process that focuses on the establishment of facts and making conclusions based on a systematic study.
Below is a deeper dive into how inquiry and research differ, and how they are similar.
Why does this distinction matter?
Understanding the difference between inquiry and research is important for a few reasons:
1. Inquiry is a broad process that may involve different paths or procedures, while research is a more formal process with the goal of establishing facts. Inquiry focuses more on asking questions, whereas research focuses more on finding answers, and asking good questions is a skill that needs to be practiced often.
2. The skills required for inquiry are far broader and can be applied in a variety of contexts. For example, in an inquiry, students ask broad questions with multiple paths for learning; if students are simply conducting research, their questions will likely be more specific. We’ve put together a PDF of question prompts for inquiry learning to demonstrate the openness that inquiry questions should have.
3. Inquiry typically involves different procedures depending on the discipline. Research, on the other hand, is more formal and systematic, following much the same process no matter what is being researched.
Related resources:
- Creating Strong Driving Questions for Inquiry Learning
- Hacking Questions (Connie Hamilton) – fantastic way to teach question formation
Scope and Depth
Both inquiry learning and the research process begin with questions. In an inquiry, students show curiosity towards a subject by asking high-quality inquiry questions . However, the point of asking questions isn’t to find an answer quickly. Since questions should come from a place of genuine curiosity, students should take their time exploring their questions in depth.
Research, on the other hand, focuses on finding an effective way to expedite the answer-finding process, which is the opposite of inquiry. It is a more formal process, and it does not ensure that students take opportunities to explore new pathways or make connections to their own lives. Research can, however, be scaffolded into simple, manageable steps to help students work more effectively.
Learning how to research is not a bad thing; in fact, it is an incredibly useful skill in many situations. The difference is that, with inquiry learning, the scope of learning is broadened: students are encouraged to think more deeply about the content and to ask questions they are genuinely curious about, as opposed to following a scaffolded process.
Suggested resource: How Scaffolding Works: A Playbook for Supporting and Releasing Responsibility to Students
Different Focus
Most classrooms frame the process of learning in a linear way: “topic → research → present → assess.” Students are probably used to being given a topic and told to research it, collect facts, and present their learning. However, inquiry is different. While both inquiry and research aim to seek and uncover information, they go about it in different ways, and each teaches students a different set of skills.
With research, a more systematic approach is used. Typically, teachers will spend a few lessons beforehand teaching students skills such as:
- Typing in relevant search terms
- Judging whether a website is safe, reliable, and current
- Skimming and scanning skills
- Reading snippets
- Checking for bias
The goal with research is to find answers, explain concepts, and generally increase knowledge; the focus is on confirming facts and expanding what is already known.
On the other hand, inquiry is much broader. The focus is not on finding the “right” answer but on the process of exploration, solving a problem or query, and understanding something new. Inquiry is far more multifaceted than research, which is often more formal by nature: an inquiry can involve more than one line of questioning and may change as a result of new information. It is fluid, progressive, and flexible.
Related: 5 Simple and Effective Strategies for Managing Conflict in Inquiry Learning
Active Learning
By definition, active learning refers to any kind of work students do other than listening, watching, and note-taking. Many educators agree that student learning is enhanced when students are actively involved in their learning. Active learning requires students to think more deeply and critically; not only does it develop students’ thinking skills, it also helps them better retain what they learn.
While the act of researching can be considered active learning, it does not involve as much creative thinking, partly because research is by nature a systematic procedure for obtaining information. By contrast, inquiry-based learning focuses more on the process of learning and involves things like group discussion, problem-solving, small activities, and teacher facilitation when needed. Furthermore, active learning can’t be reduced to formulaic methods the way research can.
Recommended resource: Active Learning: 40 Teaching Methods to Engage Students in Every Class and Every Subject, Grades 6-12
Skills Gained
Because research is more formal and focused on finding answers, students can expect to improve specific skills, including time management, search skills, analysis, organization, and general technology skills. Their research methodology (the process by which they conduct research, including the tools they use and the steps they take) will likely improve too. Students who research need to focus on specific keywords and apply analytical and organizational skills in order to work with the facts they find. In addition, a heightened attention to detail means that students will likely improve their ability to cite and reference sources accurately, which matters because organizing one’s sources is a critical component of research.
The specific skills gained while conducting an inquiry are nearly endless. What I’ve noticed is that the skills gained during inquiry learning tend to be soft skills: for example, students demonstrate more attentive listening, self-reflection, collaboration, and responsibility.
In an inquiry, skills can be taught as mini activities. For example, students may need a short activity on how to analyze a map, or some role-playing on how to communicate effectively. If you are teaching inquiry skills as mini activities, make sure to provide opportunities for active learning and group work. Scenario-based learning can be a great way to do this: not only does it challenge students to problem-solve, it also encourages them to work on their teamwork and communication skills.
Related: Using Inquiry to Teach Social Justice in the Classroom
Key Takeaways:
(1) Inquiry-based learning focuses on the process of discovery, while research is a process that focuses on the establishment of facts and making conclusions based on a systematic study
(2) Inquiry is more broad and unstructured, whereas research is more formulaic and narrow in scope, with the intent of finding specific answers
(3) Research values the expeditious discovery of facts and information, but inquiry learning usually happens at a slower and more organic pace
(4) Inquiry is far more multifaceted, flexible, and fluid than research, and often changes as a result of new information
(5) The skills gained through research are very specific and cannot always be transferred to every subject or situation; the soft skills gained through inquiry learning tend to be more transferable
The potential for AI to change cancer drug discovery and development
The impending AI-driven revolution in the life sciences promises transformative effects on human health and well-being. An accelerated drug-discovery process, for example, can help cure more diseases more quickly, freeing additional resources that could then be applied to currently underserved areas. For rarer forms of disease, including some of the hardest-to-treat cancers, AI’s potential to transform drug discovery and development holds significant promise for saving lives, reducing costs, and catalyzing further research.
Our feature in the Rewriting Cancer digital series explores AI’s potential to accelerate drug discovery and development in cancer care, and what is required to get treatments to cancer patients twice as quickly at a cost that is one-third lower. The series is presented by UICC, and produced for them by BBC StoryWorks Commercial Productions.
Through our extensive experience with a wide range of AI and life-sciences clients, McKinsey is uniquely positioned to help tackle some of the most intractable problems in cancer care. Combining the expertise of the McKinsey Cancer Center and QuantumBlack, AI by McKinsey, we are proud to support clients in biopharmaceuticals, health systems, and hospitals in unlocking the power of AI to help treat, cure, and prevent cancer.