
Methods Used in Economic Research: An Empirical Study of Trends and Levels

The methods used in economic research are analyzed on a sample of all 3,415 regular research papers published in 10 general interest journals every 5th year from 1997 to 2017. The papers are classified into three main groups by method: theory, experiments, and empirics. The theory and empirics groups are almost equally large. Most empirical papers use the classical method, which derives an operational model from theory and runs regressions. The number of papers published increases by 3.3% p.a. Two trends are highly significant: the fraction of theoretical papers has fallen by 26 pp (percentage points), while the fraction of papers using the classical method has increased by 15 pp. Economic theory predicts that such papers exaggerate, and the papers that have been analyzed by meta-analysis confirm the prediction. It is discussed whether other methods have smaller problems.

1 Introduction

This paper studies the pattern of research methods in economics using a sample of 3,415 regular papers published in the years 1997, 2002, 2007, 2012, and 2017 in 10 journals. The analysis builds on the beliefs that truth exists, but that it is difficult to find, and that all the methods listed in the next paragraph have problems, as discussed in Sections 2 and 4. By this I do not imply that all – or even most – papers have these problems, but we rarely know how serious they are when we read a paper. A key aspect of the problem is that a “perfect” study is very demanding and requires far too much space to report, especially if the paper looks for usable results. Thus, each paper is just one look at an aspect of the problem analyzed. Only when many studies using different methods reach a joint finding can we trust that it is true.

Section 2 discusses the classification of papers by method into three main categories: (M1) Theory, with three subgroups: (M1.1) economic theory, (M1.2) statistical methods, and (M1.3) surveys. (M2) Experiments, with two subgroups: (M2.1) lab experiments and (M2.2) natural experiments. (M3) Empirics, with three subgroups: (M3.1) descriptive, (M3.2) classical empirics, and (M3.3) newer empirics. More than 90% of the papers are easy to classify, but a stochastic element enters into the classification of the rest. Thus, the study has some – hopefully random – measurement errors.

Section 3 discusses the sample of journals chosen. The choice has been limited by the following main criteria: The journals should be good journals below the top ten A-journals, i.e., my article covers B-journals, which are the journals where most research economists publish. They should be general interest journals, and they should be so different that patterns that generalize across these journals are likely to apply to more (most?) journals. The Appendix gives some crude counts of researchers, departments, and journals. It assesses that there are about 150 B-level journals, but less than half meet the criteria, so I have selected about 15% of the possible ones. This is the most problematic element in the study. If the reader accepts my choice, the paper tells an interesting story about economic research.

All B-level journals try hard to have a serious refereeing process. If our selection is representative, the 150 journals have increased the annual number of papers published from about 7,500 in 1997 to about 14,000 papers in 2017, giving about 200,000 papers for the period. Thus, the B-level dominates our science. Our sample is about 6% for the years covered, but less than 2% of all papers published in B-journals in the period. However, it is a larger fraction of the papers in general interest journals.

It is impossible for anyone to read more than a small fraction of this flood of papers. Consequently, researchers compete for space in journals and for attention from readers, as measured in the form of citations. It should be uncontroversial that papers that carry a clear message are easier to publish and get more citations. Thus, an element of sales promotion may enter papers in the form of exaggeration, which is a joint problem for all eight methods. This is in accordance with economic theory, which predicts that rational researchers report exaggerated results; see Paldam (2016, 2018). For empirical papers, meta-methods exist to summarize the results from many papers, notably papers using regressions. Section 4.4 reports that meta-studies find that exaggeration is common.

The empirical literature surveying the use of research methods is quite small; I have found only two articles: Hamermesh (2013) covers 748 articles, published in three A-journals in 6 years spaced a decade apart, using a slightly different classification of methods, [1] while my study covers B-journals. Angrist, Azoulay, Ellison, Hill, and Lu (2017) use a machine-learning classification of 134,000 papers in 80 journals to look at the three main methods. My study subdivides the three categories into eight. The machine-learning algorithm is only sketched, so the paper is difficult to replicate, but it is surely a major effort. A key result in both articles is the strong decrease of theory in economic publications. This finding is confirmed, and it is shown that the corresponding increase in empirical articles is concentrated on the classical method.

I have tried to explain what I have done, so that everything is easy to replicate, in full or for one journal or one year. The coding of each article is available at least for the next five years. I should add that I have been in economic research for half a century. Some of the assessments in the paper will reflect my observations/experience during this period (indicated as my assessments). This especially applies to the judgements expressed in Section 4.

2 The eight categories

Table 1 reports that the annual number of papers in the ten journals has increased 1.9 times, or by 3.3% per year. The Appendix gives the full counts per category, journal, and year. By looking at data over two decades, I study how economic research develops. The increase in the production of papers is caused by two factors: the increase in the number of researchers and the increasing importance of publications for the careers of researchers.

Table 1. The 3,415 papers

| Year | Papers | Fraction (%) | Period | Annual increase (%) |
|------|--------|--------------|-----------|---------------------|
| 1997 | 464 | 13.6 | 1997–2002 | 2.2 |
| 2002 | 518 | 15.2 | 2002–2007 | 4.0 |
| 2007 | 661 | 19.4 | 2007–2012 | 4.6 |
| 2012 | 881 | 25.8 | 2012–2017 | 0.2 |
| 2017 | 891 | 26.1 | | |
| Sum | 3,415 | 100 | 1997–2017 | 3.3 |
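As a quick arithmetic check (my own, not from the paper), the first and last annual counts imply the growth figures quoted above:

```latex
\frac{891}{464} \approx 1.92 \ (\text{an increase of about } 1.9 \text{ times}), \qquad
g = \left(\tfrac{891}{464}\right)^{1/20} - 1 \approx 0.033 = 3.3\% \text{ per year.}
```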

2.1 (M1) Theory: subgroups (M1.1) to (M1.3)

Table 2 lists the groups and main numbers discussed in the rest of the paper. Section 2.1 discusses (M1) theory. Section 2.2 covers (M2) experimental methods, while Section 2.3 looks at (M3) empirical methods using statistical inference from data.

Table 2. The 3,415 papers – fractions in percent

| Three main groups | Fraction | Eight subgroups | Fraction |
|---|---|---|---|
| (M1) Theory | 49.6 | (M1.1) Economic theory | 45.2 |
| | | (M1.2) Statistical technique, incl. forecasting | 2.5 |
| | | (M1.3) Surveys, incl. meta-studies | 2.0 |
| (M2) Experimental | 6.4 | (M2.1) Experiments in laboratories | 5.7 |
| | | (M2.2) Events, incl. real life experiments | 0.7 |
| (M3) Data inference | 43.7 | (M3.1) Descriptive, deductions from data | 10.7 |
| | | (M3.2) Classical empirical studies | 28.5 |
| | | (M3.3) Newer techniques | 4.5 |

Table 3. The change of the fractions from 1997 to 2017 in percentage points

| Three main groups | Change | Eight subgroups | Change |
|---|---|---|---|
| (M1) Theory | −24.7 | (M1.1) Economic theory | −25.9 |
| | | (M1.2) Statistical technique, incl. forecasting | 2.2 |
| | | (M1.3) Surveys, incl. meta-studies | −1.0 |
| (M2) Experimental | 9.0 | (M2.1) Experiments in laboratories | 7.7 |
| | | (M2.2) Events, incl. real life experiments | 1.3 |
| (M3) Data inference | 15.8 | (M3.1) Descriptive, deductions from data | 2.4 |
| | | (M3.2) Classical empirical studies | 15.0 |
| | | (M3.3) Newer techniques | −1.7 |

Note: Section 3.4 tests if the pattern observed in Table 3 is statistically significant. The Appendix reports the full data.

2.1.1 (M1.1) Economic theory

These are papers where the main content is the development of a theoretical model. The ideal theory paper presents a (simple) new model that recasts the way we look at something important. Such papers are rare and obtain large numbers of citations. Most theoretical papers present variants of known models and obtain few citations.

In a few papers, the analysis is verbal, but more than 95% rely on mathematics, though the technical level differs. Theory papers may start with a descriptive introduction giving the stylized fact the model explains, but the bulk of the paper is the formal analysis, building a model and deriving proofs of some propositions from the model. How the model works is often demonstrated by a set of simulations, including a calibration made to look realistic. However, the calibrations differ greatly in the efforts made to reach realism. Often, the simulations are in lieu of an analytical solution or just an illustration suggesting the magnitudes of the results reached.

Theoretical papers suffer from the problem known as T-hacking, [2] where an able author, by a careful selection of assumptions, can tailor the theory to give the desired results. Thus, the proofs made from the model may represent the ability and preferences of the researcher rather than the properties of the economy.

2.1.2 (M1.2) Statistical method

Papers reporting new estimators and tests are published in a handful of specialized journals in econometrics and mathematical statistics – such journals are not included. In our general interest journals, some papers compare estimators on actual data sets. If the demonstration of a methodological improvement is the main feature of the paper, it belongs to (M1.2), but if the economic interpretation is the main point of the paper, it belongs to (M3.2) or (M3.3). [3]

Some papers, including a special issue of Empirical Economics (vol. 53–1), deal with forecasting models. Such models normally have a weak relation to economic theory. They are sometimes justified precisely because of their eclectic nature. They are classified as either (M1.2) or (M3.1), depending upon the focus. It appears that different methods work better on different data sets, and perhaps a trade-off exists between the user-friendliness of the model and the improvement reached.

2.1.3 (M1.3) Surveys

When the literature in a certain field becomes substantial, it normally presents a motley picture with an amazing variation, especially when different schools exist in the field. Thus, a survey is needed, and our sample contains 68 survey articles. They are of two types, where the second type is still rare:

2.1.3.1 (M1.3.1) Assessed surveys

Here, the author reads the papers and assesses what the most reliable results are. Such assessments require judgement that is often quite difficult to distinguish from priors, even for the author of the survey.

2.1.3.2 (M1.3.2) Meta-studies

They are quantitative surveys of estimates of parameters claimed to be the same. Over the two decades from 1997 to 2017, about 500 meta-studies have been made in economics. Our sample includes five, which is 0.15%. [4] Meta-analysis has two levels: The basic level collects and codes the estimates and studies their distribution. This is a rather objective exercise where results seem to replicate rather well. [5] The second level analyzes the variation between the results. This is less objective. The papers analyzed by meta-studies are empirical studies using method (M3.2), though a few use estimates from (M3.1) and (M3.3).
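A minimal sketch of the basic (first) level of such a meta-study, assuming a simple inverse-variance (fixed-effect) weighting; the estimates and standard errors below are invented, and the weighting schemes used in the meta-studies cited may differ:

```python
import numpy as np

# Hypothetical estimates of "the same" parameter collected from 8 papers,
# with their reported standard errors (all numbers invented for illustration).
estimates = np.array([0.42, 0.15, 0.58, 0.23, 0.31, 0.49, 0.12, 0.37])
std_errors = np.array([0.20, 0.08, 0.25, 0.10, 0.12, 0.22, 0.07, 0.15])

# Basic (first) level: an inverse-variance weighted meta-average of the estimates.
weights = 1.0 / std_errors**2
meta_average = np.sum(weights * estimates) / np.sum(weights)
meta_se = np.sqrt(1.0 / np.sum(weights))

print(f"meta-average = {meta_average:.3f} (s.e. {meta_se:.3f})")
print(f"unweighted mean = {estimates.mean():.3f}")
```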

2.2 (M2) Experimental methods: subgroups (M2.1) and (M2.2)

Experiments are of three distinct types. The last two are rare and take place in real life, so they are lumped together.

2.2.1 (M2.1) Lab experiments

In 1997, 1.9% of the papers in the sample used this method; by 2017, the share had risen to 9.7%. It is a technique that is much easier to apply to micro- than to macroeconomics, so it has spread unequally across the 10 journals, and many experiments are reported in a couple of specialized journals that are not included in our sample.

Most of these experiments take place in a laboratory, where the subjects communicate with a computer, giving a controlled, but artificial, environment. [6] A number of subjects are told a (more or less abstract) story and paid to react in one of a number of possible ways. A great deal of ingenuity has gone into the construction of such experiments and into the methods used to analyze the results. Lab experiments allow studies of behavior that are hard to analyze in any other way, and they frequently show sides of human behavior that are difficult to rationalize by economic theory. It appears that such a demonstration is a strong argument for the publication of a study.

However, everything is artificial – even the payment. In some cases, the stories told are so elaborate and abstract that framing must be a substantial risk; [7] see Levitt and List ( 2007 ) for a lucid summary, and Bergh and Wichardt ( 2018 ) for a striking example. In addition, experiments cost money, which limits the number of subjects. It is also worth pointing to the difference between expressive and real behavior. It is typically much cheaper for the subject to “express” nice behavior in a lab than to be nice in the real world.

(M2.2) Event studies are studies of real world experiments. They are of two types:

(M2.2.1) Field experiments analyze cases where some people get a certain treatment and others do not. The “gold standard” for such experiments is double blind random sampling, where everything (but the result!) is preannounced; see Christensen and Miguel ( 2018 ). Experiments with humans require permission from the relevant authorities, and the experiment takes time too. In the process, things may happen that compromise the strict rules of the standard. [8] Controlled experiments are expensive, as they require a team of researchers. Our sample of papers contains no study that fulfills the gold standard requirements, but there are a few less stringent studies of real life experiments.

(M2.2.2) Natural experiments take advantage of a discontinuity in the environment, i.e., the period before and after an (unpredicted) change of a law, an earthquake, etc. Methods have been developed to find the effect of the discontinuity. Often, such studies look like (M3.2) classical studies with many controls that may or may not belong. Thus, the problems discussed under (M3.2) will also apply.

2.3 (M3) Empirical methods: subgroups (M3.1) to (M3.3)

The remaining methods are studies making inference from “real” data, which are data samples where the researcher chooses the sample, but has no control over the data generating process.

(M3.1) Descriptive studies are deductive. The researcher describes the data aiming at finding structures that tell a story, which can be interpreted. The findings may call for a formal test. If one clean test follows from the description, [9] the paper is classified under (M3.1). If a more elaborate regression analysis is used, it is classified as (M3.2). Descriptive studies often contain a great deal of theory.

Some descriptive studies present a new data set developed by the author to analyze a debated issue. In these cases, it is often possible to make a clean test, so to the extent that biases sneak in, they are hidden in the details of the assessments made when the data are compiled.

(M3.2) Classical empirics has three steps: It starts from a theory, which is developed into an operational model. Then it presents the data set, and finally it runs regressions.

The significance levels of the t-ratios on the coefficients estimated assume that the regression is the first meeting of the estimation model and the data. We all know that this is rarely the case; see also point (m1) in Section 4.4. In practice, the classical method is often just a presentation technique. The great virtue of the method is that it can be applied to real problems outside academia. The relevance comes with a price: The method is quite flexible, as many choices have to be made, and they often give different results. Preferences and interests, as discussed in Sections 4.3 and 4.4 below, notably as point (m2), may affect these choices.
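To make the three-step convention concrete, here is a minimal sketch with simulated data; the variables, coefficients, and data-generating process are invented for illustration and are not taken from any paper in the sample:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Steps 1-2: a theory is reduced to an operational model,
# here y = b0 + b1*x + b2*control + error (all names and coefficients invented).
x = rng.normal(size=n)          # the variable of interest
control = rng.normal(size=n)    # a ceteris paribus control
y = 1.0 + 0.5 * x + 0.3 * control + rng.normal(size=n)

# Step 3: run the regression and read off coefficient estimates and t-ratios.
X = sm.add_constant(np.column_stack([x, control]))
print(sm.OLS(y, X).fit().summary())
```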

(M3.3) Newer empirics. Partly as a reaction to the problems of (M3.2), the last 3–4 decades have seen a whole set of newer empirical techniques. [10] They include different types of VARs, Bayesian techniques, causality/co-integration tests, Kalman filters, hazard functions, etc. I have found 162 (or 4.7%) papers where these techniques are the main ones used. The fraction was highest in 1997. Since then it has varied, but with no trend.

I think that the main reason for the lack of success for the new empirics is that it is quite bulky to report a careful set of co-integration tests or VARs, and they often show results that are far from useful in the sense that they are unclear and difficult to interpret. With some introduction and discussion, there is not much space left in the article. Therefore, we are dealing with a cookbook that makes for rather dull dishes, which are difficult to sell in the market.

Note the contrast between (M3.2) and (M3.3): (M3.2) makes it possible to write papers that are too good, while (M3.3) often makes them too dull. This helps explain why (M3.2) is getting (even) more popular and why (M3.3) has had little success, but then, it is arguably more dangerous to act on exaggerated results than on results that are weak.

3 The 10 journals

The 10 journals chosen are: (J1) Can [Canadian Journal of Economics], (J2) Emp [Empirical Economics], (J3) EER [European Economic Review], (J4) EJPE [European Journal of Political Economy], (J5) JEBO [Journal of Economic Behavior & Organization], (J6) Inter [Journal of International Economics], (J7) Macro [Journal of Macroeconomics], (J8) Kyklos, (J9) PuCh [Public Choice], and (J10) SJE [Scandinavian Journal of Economics].

Section 3.1 discusses the choice of journals, while Section 3.2 considers how journals deal with the pressure for publication. Section 3.3 shows the marked difference in publication profile of the journals, and Section 3.4 tests if the trends in methods are significant.

3.1 The selection of journals

(i) They should be general interest journals – methodological journals are excluded. By general interest, I mean that they publish papers where an executive summary may interest policymakers and people in general. (ii) They should be journals in English (the Canadian Journal includes one paper in French), which are open to researchers from all countries, so that the majority of the authors are from outside the country of the journal. [11] (iii) They should be sufficiently different that patterns which apply across these journals tell a believable story about economic research. Note that (i) and (iii) require some compromises, as is evident in the choice of (J2), (J6), (J7), and (J8) (Table 4).

Table 4. The 10 journals covered

| Code | Name | Volumes (1997, 2002, 2007, 2012, 2017) | Papers (1997, 2002, 2007, 2012, 2017) | All | Growth, % p.a. |
|---|---|---|---|---|---|
| (J1) | Can | 30; 35; 40; 45; 50 | 68; 43; 55; 66; 46 | 278 | −1.9 |
| (J2) | Emp | 22; 27; 32–43; 42–3; 52–3 | 33; 36; 48; 104; 139 | 360 | 7.5 |
| (J3) | EER | 41; 46; 51; 56; 91–100 | 56; 91; 89; 106; 140 | 482 | 4.7 |
| (J4) | EJPE | 13; 18; 23; 28; 46–50 | 42; 40; 68; 47; 49 | 246 | 0.8 |
| (J5) | JEBO | 32; 47–9; 62–4; 82–4; 133–44 | 41; 85; 101; 207; 229 | 663 | 9.0 |
| (J6) | Inter | 42; 56–8; 71–3; 86–8; 104–9 | 45; 59; 66; 87; 93 | 350 | 3.7 |
| (J7) | Macro | 19; 24; 29; 34; 51–4 | 44; 25; 51; 79; 65 | 264 | 2.0 |
| (J8) | Kyklos | 50; 55; 60; 65; 70 | 21; 22; 30; 29; 24 | 126 | 0.7 |
| (J9) | PuCh | 90–3; 110–3; 130–3; 150–3; 170–3 | 83; 87; 114; 99; 67 | 450 | −1.1 |
| (J10) | SJE | 99; 104; 109; 114; 119 | 31; 30; 39; 57; 39 | 196 | 1.2 |
| All | | | 464; 518; 661; 881; 891 | 3,415 | 3.3 |

Note. Growth is the average annual growth from 1997 to 2017 in the number of papers published.

Methodological journals are excluded, as they are not interesting to outsiders. However, new methods are developed to be used in general interest journals. From studies of citations, we know that useful methodological papers are highly cited. If they remain unused, we presume that it is because they are useless, though, of course, there may be a long lag.

The choice of journals may contain some subjectivity, but I think that they are sufficiently diverse so that patterns that generalize across these journals will also generalize across a broader range of good journals.

The papers included are the regular research articles. Consequently, I exclude short notes commenting on other papers and book reviews, [12] except for a few article-length discussions of controversial books.

3.2 Creating space in journals

As mentioned in the introduction, the annual production of research papers in economics has now reached about 1,000 papers in top journals and about 14,000 papers in the group of good journals. [13] The production has grown by 3.3% per year, and thus it has doubled over the last twenty years. The hard-working researcher will read less than 100 papers a year, and I know of no signs that this number is increasing. Thus, the upward trend in publication must be due to the large increase in the importance of publications for the careers of researchers, which has greatly increased the production of papers. There has also been a large increase in the number of researchers, but as citations are increasingly skewed toward the top journals (see Heckman & Moktan, 2018), this has not increased the demand for papers correspondingly. The pressures from the supply side have caused journals to look for ways to create space.

Book reviews have dropped to less than one-third of their former number. Perhaps this also indicates that economists read fewer books than they used to. Journals have increasingly come to use smaller fonts and larger pages, allowing more words per page. The journals from North-Holland Elsevier have managed to cram almost two old pages into one new one. [14] This makes it easier to publish papers, while they become harder to read.

Many journals have changed their numbering system for the annual issues, making it less transparent how much they publish. Only three – the Canadian Journal of Economics, Kyklos, and the Scandinavian Journal of Economics – have kept the schedule of publishing one volume of four issues per year, which gives about 40 papers per year. Public Choice has a (fairly) consistent system of four volumes of two double issues per year – this gives about 100 papers. The remaining journals have changed their numbering systems and increased the number of papers published per year – often dramatically.

Thus, I assess that the wave of publications is caused by the increased supply of papers and not by the demand for reading material. Consequently, the study confirms and updates the observation by Temple (1918, p. 242): “… as the world gets older the more people are inclined to write but the less they are inclined to read.”

3.3 How different are the journals?

The Appendix reports the counts of the research methods for each year and journal. From these counts, a set of χ²-scores is calculated for the three main groups of methods – they are reported in Table 5. It gives the χ²-test comparing the profile of each journal to that of the other nine journals, which is taken to be the theoretical distribution.

Table 5. The methodological profile of the journals – χ²-scores for main groups

| Code | Name | (M1) Theory | (M2) Experiment | (M3) Empirical | Sum, χ²(3)-test | P-value (%) |
|---|---|---|---|---|---|---|
| (J1) | Can | 7.4 (+) | 15.3 (−) | 1.7 (−) | 24.4 | 0.00 |
| (J2) | Emp | 47.4 (−) | 16.0 (−) | 89.5 (+) | 152.9 | 0.00 |
| (J3) | EER | 17.8 (+) | 0.3 (−) | 16.5 (−) | 34.4 | 0.00 |
| (J4) | EJPE | 0.1 (+) | 11.2 (−) | 1.0 (+) | 12.2 | 0.31 |
| (J5) | JEBO | 1.6 (−) | 1357.7 (+) | 41.1 (−) | 1404.4 | 0.00 |
| (J6) | Inter | 2.4 (+) | 24.8 (−) | 0.1 (+) | 27.3 | 0.00 |
| (J7) | Macro | 0.1 (+) | 18.2 (−) | 1.7 (+) | 20.0 | 0.01 |
| (J8) | Kyklos | 20.1 (−) | 3.3 (−) | 31.2 (+) | 54.6 | 0.00 |
| (J9) | PuCh | 0.0 (+) | 11.7 (−) | 2.2 (+) | 13.9 | 0.14 |
| (J10) | SJE | 10.5 (+) | 1.8 (−) | 8.2 (−) | 20.4 | 0.01 |

Note: The χ²-scores are calculated relative to all other journals. The sign (+) or (−) indicates whether the journal has relatively too many or too few papers in the category. The P-values for the χ²(3)-test always reject that the journal has the same methodological profile as the other nine journals.

The test rejects, for every journal, that its distribution is the same as the average. The closest to the average are the EJPE and Public Choice. The two most deviating scores are for the most micro-oriented journal, JEBO, which brings many experimental papers, and, of course, Empirical Economics, which brings many empirical papers.
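A sketch of how such a profile test can be computed for a single journal, using the pooled counts of the other nine journals as the expected distribution. The counts below are invented placeholders, and the degrees of freedom follow the textbook k − 1 convention, which may differ from the χ²(3) convention reported in Table 5:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical counts over the three main groups (M1, M2, M3).
journal_counts = np.array([120, 10, 70])      # the journal being tested
others_counts = np.array([1400, 180, 1000])   # the other nine journals pooled

# Expected counts if the journal followed the pooled profile of the others.
expected = others_counts / others_counts.sum() * journal_counts.sum()

chi2_stat = np.sum((journal_counts - expected) ** 2 / expected)
p_value = chi2.sf(chi2_stat, df=len(journal_counts) - 1)   # textbook df = k - 1
print(f"chi2 = {chi2_stat:.1f}, p-value = {p_value:.4f}")
```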

3.4 Trends in the use of the methods

Table 3 already gave an impression of the main trends in the methods preferred by economists. I now test whether these impressions are statistically significant. The tests have to be tailored to disregard three differences between the journals: their methodological profiles, the number of papers they publish, and the trend in that number. Table 6 reports a set of distribution-free tests, which overcome these differences. The tests are done on the shares of each research method for each journal. As the data cover five years, this gives 10 pairs of years to compare. [15] The three trend-scores in the []-brackets count how often the shares go up, go down, or stay the same in the 10 cases. This is the count done for a Kendall rank correlation comparing the five shares with a positive trend (such as 1, 2, 3, 4, and 5).

Table 6. Trend-scores and tests for the eight subgroups of methods across the 10 journals

Journal (M1.1) (M1.2) (M1.3) (M2.1) (M2.2) (M3.1) (M3.2) (M3.3)
Code Name Theory Stat met Survey Exp. Event Descript. Classical Newer
(J1) Can [6, 3, 1] [6, 3, 1] [3, 1, 6] [3, 1, 6] [6, 4, 0] [8, 2, 0] [5, 4, 1]
(J2) Emp [2, 8, 0] [6, 4, 0] [0, 7, 3] [0, 4, 6] [3, 4, 3] [6, 4, 0] [8, 2, 0] [4, 6, 0]
(J3) EER [3, 7, 0] [4, 0, 6] [3, 1, 6] [7, 3, 0] [8, 2, 0] [3, 7, 0]
(J4) EJPE [0, 0, 10] [4, 0, 6] [4, 0, 6] [4, 6, 0] [8, 1, 0]
(J5) JEBO [2, 8, 0] [6, 1, 3] [6, 3, 1] [7, 3, 0] [6, 1, 3] [4, 6, 0] [8, 2, 0] [2, 4, 3]
(J6) Inter [0, 0, 10] [0, 0, 10] [0, 0, 10] [0, 0, 10] [8, 2, 0] [8, 2, 0] [4, 6, 0]
(J7) Macro [6, 4, 0] [5, 5, 0] [7, 2, 1] [0, 0, 10] [0, 0, 10] [3, 7, 0]
(J8) Kyklos [2, 8, 0] [0, 0, 10] [2, 2, 6] [2, 7, 1] [0, 0, 10] [4, 6, 0] [2, 2, 6]
(J9) PuCh [3, 7, 0] [4, 3, 3] [6, 3, 1] [4, 3, 3] [0, 0, 10] [5, 5, 0] [6, 4, 0] [6, 3, 1]
(J10) SJE [4, 0, 6] [6, 3, 1] [1, 3, 6] [3, 1, 6] [6, 4, 0] [6, 4, 0] [6, 1, 1]
All 100 per col. [22, 78, 0] [35, 16, 49] [35, 41, 24] [30, 22, 48] [22, 8, 70] [59, 41, 0] [73, 27, 0] [42, 43, 13]
Binomial test 56% 33% 8.86% 100%

Note: The three trend-scores in each [I1, I2, I3]-bracket are a Kendall-count over all 10 combinations of years. I1 counts how often the share goes up, I2 counts how often it goes down, and I3 counts the number of ties. Most ties occur when there are no observations in either year. Thus, I1 + I2 + I3 = 10. The tests are two-sided binomial tests disregarding the zeroes. The test results in bold are significant at the 5% level.

The first set of trend-scores, for (M1.1) and (J1), is [1, 9, 0]. It means that 1 of the 10 share-pairs increases, while nine decrease and no ties are found. The two-sided binomial test gives 2%, so this is unlikely to happen by chance. Nine of the ten journals in the (M1.1)-column have a majority of falling shares. The important point is that the counts in one column can be added – as is done in the All-row; this gives a powerful trend test that disregards differences between journals and the number of papers published (see also Table A1).
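A sketch of the trend-score count and the two-sided binomial test just described, applied to an invented series of five shares chosen to reproduce the [1, 9, 0] example:

```python
from scipy.stats import binomtest

# Invented shares of one method in one journal for 1997, 2002, 2007, 2012, 2017,
# chosen so that the count reproduces the [1, 9, 0] example in the text.
shares = [0.55, 0.50, 0.52, 0.40, 0.35]

ups = downs = ties = 0
for i in range(len(shares)):
    for j in range(i + 1, len(shares)):      # the 10 pairs of years
        if shares[j] > shares[i]:
            ups += 1
        elif shares[j] < shares[i]:
            downs += 1
        else:
            ties += 1

print(f"trend-score = [{ups}, {downs}, {ties}]")

# Two-sided binomial test disregarding ties, as in Table 6.
result = binomtest(ups, ups + downs, p=0.5, alternative="two-sided")
print(f"binomial p-value = {result.pvalue:.3f}")   # about 0.02 for [1, 9, 0]
```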

Four of the trend-tests are significant: the fall in theoretical papers, the rise in classical papers, and the rises in the shares of statistical-method and event-study papers. It is surprising that there is no significant trend in the number of experimental studies, but see Table A2 (in Appendix).

4 An attempt to interpret the pattern found

The development in the methods pursued by researchers in economics is a reaction to the demand and supply forces on the market for economic papers. As already argued, it seems that a key factor is the increasing production of papers.

The shares add to 100, so the decline of one method means that the others rise. Section 4.1 looks at the biggest change – the reduction in theory papers. Section 4.2 discusses the rise in two new categories. Section 4.3 considers the large increase in the classical method, while Section 4.4 looks at what we know about that method from meta-analysis.

4.1 The decline of theory: economics suffers from theory fatigue [16]

The share of papers in economic theory has dropped from 59.5% to 33.6% – this is the largest change for any of the eight subgroups. [17] It is highly significant in the trend test. I attribute this drop to theory fatigue.

As mentioned in Section 2.1, the ideal theory paper presents a (simple) new model that recasts the way we look at something important. However, most theory papers are less exciting: They start from the standard model and argue that a well-known conclusion reached from the model hinges upon a debatable assumption – if it changes, so does the conclusion. Such papers are useful. From a literature on one main model, the profession learns its strengths and weaknesses. It appears that no generally accepted method exists to summarize this knowledge in a systematic way, though many thoughtful summaries have appeared.

I think that there is a deeper problem explaining theory fatigue. It is that many theoretical papers are quite unconvincing. Granted that the calculations are done right, believability hinges on the realism of the assumptions at the start and of the results presented at the end. In order for a model to convince, it should (at least) demonstrate the realism of either the assumptions or the outcome. [18] If both ends appear to hang in the air, it becomes a game giving little new knowledge about the world, however skillfully played.

The theory fatigue has created a demand for simulations demonstrating that the models can mimic something in the world. Kydland and Prescott pioneered calibration methods (see their 1991 paper). Calibrations may be carefully done, but they often amount to a numerical solution of a model that is too complex to allow an analytical solution.

4.2 Two examples of waves: one that is still rising and another that is fizzling out

When a new method of gaining insights into the economy first appears, it is surrounded by doubts, but it also promises a high marginal productivity of knowledge. Gradually the doubts subside, and many researchers enter the field. After some time, this causes the marginal productivity of the method to fall, and it becomes less interesting. The eight methods include two newer ones: lab experiments and newer stats. [19]

It is not surprising that papers with lab experiments are increasing, though it did take a long time: The seminal paper presenting the technique was Smith (1962), but only a handful of papers are from the 1960s. Charles Plott organized the first experimental lab 10 years later – this created a new standard for experiments, but it required an investment in a lab and some staff. Labs became more common in the 1990s as PCs got cheaper and software was developed to handle experiments, but only 1.9% of the papers in the 10 journals reported lab experiments in 1997. This has now increased to 9.7%, so the wave is still rising. The trend in experiments is concentrated in a few journals, so the trend test in Table 6 is insignificant, but it is significant in the Appendix Table A2, where it is done on the sum of articles irrespective of the journal.

In addition to the rising share of lab experiment papers in some journals, the journal Experimental Economics was started in 1998, where it published 281 pages in three issues. In 2017, it had reached 1,006 pages in four issues, [20] which is an annual increase of 6.5%.

Compared with the success of experimental economics, the motley category of newer empirics has had more modest success: the fractions of papers in the 5 years are 5.8, 5.2, 3.5, 5.4, and 4.2%, with no trend. Newer stats also require investment, but mainly in human capital. [21] Some of the papers using the classical methodology contain a table with Dickey-Fuller tests or some eigenvalues of the data matrix, but these are normally peripheral to the analysis. A couple of papers use Kalman filters, and a dozen papers use Bayesian VARs. However, it is clear that the newer empirics have made little headway into our sample of general interest journals.

4.3 The steady rise of the classical method: flexibility rewarded

The typical classical paper provides estimates of a key effect that decision-makers outside academia want to know. This makes the paper policy-relevant right from the start, and in many cases, it is possible to write a one-page executive summary for those decision-makers.

The three-step convention (see Section 2.3) is often followed rather loosely. The estimation model is nearly always much simpler than the theory. Thus, while the model can be derived from the theory, the reverse does not apply. Sometimes, the model seems to follow straight from common sense, and if the link from the theory to the model is thin, it raises the question: Is the theory really necessary? In such cases, it is hard to be convinced that the tests “confirm” the theory, but then, of course, tests only say that the data do not reject the theory.

The classical method is often only a presentation device. Think of a researcher who has reached a nice publishable result through a long and tortuous path, including some failed attempts to find such results. It is not possible to describe that path within the severely limited space of an article. In addition, such a presentation would be rather dull to read, and none of us likes to talk about wasted efforts that in hindsight seem a bit silly. Here, the classical method becomes a convenient presentation device.

The biggest source of variation in the results is the choice of control/modifier variables. All datasets presumably contain some general and some special information, where the latter depends on the circumstances prevailing when the data were compiled. The regression should be controlled for these circumstances in order to reach the general result. Such ceteris paribus controls are not part of the theory, so many possible controls may be added. The ones chosen for publication often appear to be the ones delivering the “right” results by the priors of the researcher. The justification for their inclusion is often thin, and if two-stage regressions are used, the first stage instruments often have an even thinner justification.

Thus, the classical method is rather malleable to the preferences and interests of researchers and sponsors. This means that some papers using the classical technique are not what they pretend to be, as already pointed out by Leamer (1983); see also Paldam (2018) for new references and theory. The fact that data mining is tempting suggests that it is often possible to reach smashing results, making the paper nice to read. This may be precisely why it is cited.

Many papers using the classical method throw in some bits of exotic statistical technique to demonstrate the robustness of the result and the ability of the researcher. This presumably helps to generate credibility.

4.4 Knowledge about classical papers reached from meta-studies

(m1) The range of the estimates is typically amazingly large, given the high t-ratios reported. This confirms that t-ratios are problematic, as claimed in Section 2.3.
(m2) Publication biases (exaggerations) are common, i.e., meta-analyses routinely reject the null hypothesis of no publication bias. My own crude rule of thumb is that the exaggeration is by a factor of two – the two meta-meta studies cited give some support to this rule.
(m3) The meta-average estimated from all studies normally converges, and for N > 30, the meta-average normally stabilizes at a well-defined value; see Doucouliagos et al. (2018).

Individual studies using the classical method often look better than they are, and thus they are more uncertain than they appear, but we may think of the value of convergence for large Ns (numbers of observations) as the truth. The exaggeration is largest in the beginning of a new literature, but gradually it becomes smaller. Thus, the classical method does generate truth when the effect searched for has been studied from many sides. The word research does mean that the search has to be repeated! It is highly risky to trust a few papers only.
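The exaggeration mechanism can be illustrated with a small simulation, far simpler than the models in Paldam (2016, 2018): if only statistically significant estimates of a weak true effect are reported, the average published estimate overstates the truth. All numbers are illustrative assumptions, and the resulting factor depends on the assumed effect size and precision:

```python
import numpy as np

rng = np.random.default_rng(42)

true_effect = 0.1        # assumed "true" parameter value
std_error = 0.1          # assumed sampling uncertainty of each study
n_studies = 100_000      # many simulated studies

estimates = rng.normal(true_effect, std_error, n_studies)
t_ratios = estimates / std_error

# Selection: only estimates that are significant (with the "right" sign) get published.
published = estimates[t_ratios > 1.96]

print(f"true effect:            {true_effect:.3f}")
print(f"mean of all estimates:  {estimates.mean():.3f}")
print(f"mean of published ones: {published.mean():.3f}")
print(f"implied exaggeration:   {published.mean() / true_effect:.1f}x")
```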

Meta-analysis has found other results such as: Results in top journals do not stand out. It is necessary to look at many journals, as many papers on the same effect are needed. Little of the large variation between results is due to the choice of estimators.

A similar development should occur in experimental economics. Experiments fall into families: A large number cover prisoner’s dilemma games, but there are also many studies of dictator games, auction games, etc. Surveys summarizing what we have learned about these games seem highly needed. Assessed summaries of old experiments are common, notably in introductions to papers reporting new ones. It should be possible to extract the knowledge reached by sets of related lab experiments in a quantitative way, by some sort of meta-technique, but this has barely started. The first pioneering meta-studies of lab experiments do find the usual wide variation of results from seemingly closely related experiments. [25] A recent large-scale replicability study by Camerer et al. (2018) finds that published experiments in the high-quality journals Nature and Science exaggerate by a factor of two, just like regression studies using the classical method.

5 Conclusion

The study presents evidence that over the last 20 years economic research has moved away from theory towards empirical work using the classical method.

From the eighties onward, there has been a steady stream of papers pointing out that the classical method suffers from excess flexibility. It does deliver relevant results, but they tend to be too good. [26] While, increasingly, we know the size of the problems of the classical method, systematic knowledge about the problems of the other methods is weaker. It is possible that the problems are smaller, but we do not know.

Therefore, it is clear that obtaining solid knowledge about the size of an important effect requires a great number of papers analyzing many aspects of the effect, followed by a careful quantitative survey. It is a well-known principle in the harder sciences that results need repeated independent replication to be truly trustworthy. In economics, this is only accepted in principle.

The classical method of empirical research is gradually winning, and this is a fine development: It does give answers to important policy questions. These answers are highly variable and often exaggerated, but through the efforts of many competing researchers, solid knowledge will gradually emerge.

Home page: http://www.martin.paldam.dk

Acknowledgments

The paper has been presented at the 2018 MAER-Net Colloquium in Melbourne, the Kiel Aarhus workshop in 2018, and at the European Public Choice 2019 Meeting in Jerusalem. I am grateful for all comments, especially from Chris Doucouliagos, Eelke de Jong, and Bob Reed. In addition, I thank the referees for constructive advice.

Conflict of interest: Author states no conflict of interest.

Appendix: Two tables and some assessments of the size of the profession

The text needs some numbers to assess the representativeness of the results reached. These numbers need only be orders of magnitude. I use the standard three-level classification into A, B, and C of researchers, departments, and journals. The connections between the three categories are dynamic and rely on complex sorting mechanisms. In an international setting, it matters that researchers have preferences for countries, notably their own. The relation between the three categories has a stochastic element.

The World of Learning organization reports on 36,000 universities, colleges, and other institutes of tertiary education and research. Many of these institutions are mainly engaged in undergraduate teaching, and some are quite modest. If half of these institutions have a program in economics, with a staff of at least five, the total stock of academic economists is 100,000, of which most are at the C-level.

The A-level consists of about 500 tenured researchers working at the top ten universities, who (mainly) publish in the top 10 journals, which bring less than 1,000 papers per year; [27] see Heckman and Moktan (2018). They (mainly) cite each other, but they greatly influence other researchers. [28] The B-level consists of about 15,000–20,000 researchers who work at 400–500 research universities with graduate programs and ambitions to publish. They (mainly) publish in the next level of about 150 journals. [29] In addition, there are at least another 1,000 institutions that strive to move up in the hierarchy.

Table A1. The counts for each of the 10 journals, for each of the five years

Main group (M1) (M2) (M3)
Subgroup (M1.1) (M1.2) (M1.3) (M2.1) (M2.2) (M3.1) (M3.2) (M3.3)
Number papers Theory Stat. theory Surveys meta Experiments Event studies Descriptive Classical empiric Newer empiric
1997:
(J1) Can 68 47 2 10 8 1
(J2) Emp 33 11 5 1 7 3 6
(J3) EER 56 34 3 4 12 3
(J4) EJPE 42 29 2 5 6
(J5) JEBO 41 26 7 3 5
(J6) Inter 45 35 1 7 2
(J7) Macro 44 18 1 10 15
(J8) Kyklos 21 10 1 4 6
(J9) PuCh 83 40 7 1 1 8 26
(J10) SJE 31 26 1 4
2002:
(J1) Can 43 27 1 5 7 3
(J2) Emp 36 1 14 1 4 7 9
(J3) EER 91 63 4 3 4 17
(J4) EJPE 40 27 2 2 9
(J5) JEBO 85 52 3 14 10 5 1
(J6) Inter 59 40 4 9 6
(J7) Macro 25 8 2 1 6 8
(J8) Kyklos 22 6 1 2 13
(J9) PuCh 87 39 2 1 14 31
(J10) SJE 30 18 2 10
2007:
(J1) Can 55 26 4 6 17 2
(J2) Emp 48 4 8 3 23 10
(J3) EER 89 55 2 1 8 20 3
(J4) EJPE 68 36 2 9 20 1
(J5) JEBO 101 73 10 3 3 12
(J6) Inter 66 39 4 21 2
(J7) Macro 51 30 1 6 10 4
(J8) Kyklos 30 2 1 6 20 1
(J9) PuCh 114 53 4 19 38
(J10) SJE 39 29 1 1 2 6
2012:
(J1) Can 66 33 1 1 1 8 21 1
(J2) Emp 104 8 16 17 38 25
(J3) EER 106 56 7 1 7 33 2
(J4) EJPE 47 12 1 2 31 1
(J5) JEBO 207 75 2 9 50 17 52 2
(J6) Inter 87 36 17 33 1
(J7) Macro 79 32 2 3 12 14 16
(J8) Kyklos 29 8 2 19
(J9) PuCh 99 47 2 2 48
(J10) SJE 57 32 2 1 22
2017:
(J1) Can 46 20 1 5 9 9 2
(J2) Emp 139 1 25 4 30 60 19
(J3) EER 140 75 1 1 16 13 32 2
(J4) EJPE 49 14 2 1 4 27 1
(J5) JEBO 229 66 1 3 63 9 11 76
(J6) Inter 93 42 10 33 8
(J7) Macro 65 28 1 9 10 13 4
(J8) Kyklos 24 1 1 3 19
(J9) PuCh 67 33 1 3 10 20
(J10) SJE 39 19 1 1 1 4 12 1

Table A2. Counts, shares, and changes for all ten journals, by subgroup

I: Sum of counts

| Year | All | (M1.1) | (M1.2) | (M1.3) | (M2.1) | (M2.2) | (M3.1) | (M3.2) | (M3.3) |
|---|---|---|---|---|---|---|---|---|---|
| 1997 | 464 | 276 | 5 | 15 | 9 | 2 | 43 | 87 | 27 |
| 2002 | 518 | 281 | 19 | 11 | 21 | 0 | 45 | 114 | 27 |
| 2007 | 661 | 347 | 10 | 9 | 15 | 4 | 66 | 187 | 23 |
| 2012 | 881 | 339 | 21 | 13 | 62 | 3 | 106 | 289 | 48 |
| 2017 | 891 | 299 | 29 | 20 | 86 | 15 | 104 | 301 | 37 |
| All years | 3,415 | 1,542 | 84 | 68 | 193 | 24 | 364 | 978 | 162 |

II: Average fraction in per cent

| Year | Sum | (M1.1) | (M1.2) | (M1.3) | (M2.1) | (M2.2) | (M3.1) | (M3.2) | (M3.3) |
|---|---|---|---|---|---|---|---|---|---|
| 1997 | 100 | 59.5 | 1.1 | 3.2 | 1.9 | 0.4 | 9.3 | 18.8 | 5.8 |
| 2002 | 100 | 54.2 | 3.7 | 2.1 | 4.1 | 0.0 | 8.7 | 22.0 | 5.2 |
| 2007 | 100 | 52.5 | 1.5 | 1.4 | 2.3 | 0.6 | 10.0 | 28.3 | 3.5 |
| 2012 | 100 | 38.5 | 2.4 | 1.5 | 7.0 | 0.3 | 12.0 | 32.8 | 5.4 |
| 2017 | 100 | 33.6 | 3.3 | 2.2 | 9.7 | 1.7 | 11.7 | 33.8 | 4.2 |
| All years | 100 | 45.2 | 2.5 | 2.0 | 5.7 | 0.7 | 10.7 | 28.6 | 4.7 |
| Trend-scores | | [0, 10, 0] | [7, 3, 0] | [4, 6, 0] | [9, 1, 0] | [5, 5, 0] | [8, 2, 0] | [10, 0, 0] | [3, 7, 0] |

Binomial test: 34 37 100 11 34

III: Change of fraction in percentage points

| From | To | (M1.1) | (M1.2) | (M1.3) | (M2.1) | (M2.2) | (M3.1) | (M3.2) | (M3.3) |
|---|---|---|---|---|---|---|---|---|---|
| 1997 | 2002 | −5.2 | 2.6 | −1.1 | 2.1 | −0.4 | −0.6 | 3.3 | −0.6 |
| 2002 | 2007 | −1.8 | −2.2 | −0.8 | −1.8 | 0.6 | 1.3 | 6.3 | −1.7 |
| 2007 | 2012 | −14.0 | 0.9 | 0.1 | 4.8 | −0.3 | 2.0 | 4.5 | 2.0 |
| 2012 | 2017 | −4.9 | 0.9 | 0.8 | 2.6 | 1.3 | −0.4 | 1.0 | −1.3 |
| 1997 | 2017 | −25.9 | 2.2 | −1.0 | 7.7 | 1.3 | 2.4 | 15.0 | −1.7 |

Note: The trend-scores are calculated as in Table 6. The results are similar to those in Table 6, but the power is lower. However, note that the results in column (M2.1), dealing with experiments, are stronger in Table A2. This has to do with the way missing observations are treated in the test.

Angrist, J., Azoulay, P., Ellison, G., Hill, R., & Lu, S. F. (2017). Economic research evolves: Fields and styles. American Economic Review (Papers & Proceedings), 107, 293–297. doi:10.1257/aer.p20171117

Bergh, A., & Wichardt, P. C. (2018). Mine, ours or yours? Unintended framing effects in dictator games (IFN Working Paper No. 1205). Research Institute of Industrial Economics, Stockholm; CESifo, München. doi:10.2139/ssrn.3208589

Brodeur, A., Cook, N., & Heyes, A. (2020). Methods matter: p-Hacking and publication bias in causal analysis in economics. American Economic Review, 110(11), 3634–3660. doi:10.1257/aer.20190687

Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T.-H., Huber, J., Johannesson, M., … Wu, H. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2, 637–644. https://www.nature.com/articles/s41562-018-0399-z

Card, D., & DellaVigna, S. (2013). Nine facts about top journals in economics. Journal of Economic Literature, 51, 144–161. doi:10.3386/w18665

Christensen, G., & Miguel, E. (2018). Transparency, reproducibility, and the credibility of economics research. Journal of Economic Literature, 56, 920–980. doi:10.3386/w22989

Doucouliagos, H., Paldam, M., & Stanley, T. D. (2018). Skating on thin evidence: Implications for public policy. European Journal of Political Economy, 54, 16–25. doi:10.1016/j.ejpoleco.2018.03.004

Engel, C. (2011). Dictator games: A meta study. Experimental Economics, 14, 583–610. doi:10.1007/s10683-011-9283-7

Fiala, L., & Suetens, S. (2017). Transparency and cooperation in repeated dilemma games: A meta study. Experimental Economics, 20, 755–771. doi:10.1007/s10683-017-9517-4

Friedman, M. (1953). Essays in positive economics. Chicago: University of Chicago Press.

Hamermesh, D. (2013). Six decades of top economics publishing: Who and how? Journal of Economic Literature, 51, 162–172. doi:10.3386/w18635

Heckman, J. J., & Moktan, S. (2018). Publishing and promotion in economics: The tyranny of the top five. Journal of Economic Literature, 51, 419–470. doi:10.3386/w25093

Ioannidis, J. P. A., Stanley, T. D., & Doucouliagos, H. (2017). The power of bias in economics research. Economic Journal, 127, F236–F265. doi:10.1111/ecoj.12461

Johansen, S., & Juselius, K. (1990). Maximum likelihood estimation and inference on cointegration – with application to the demand for money. Oxford Bulletin of Economics and Statistics, 52, 169–210. doi:10.1111/j.1468-0084.1990.mp52002003.x

Justman, M. (2018). Randomized controlled trials informing public policy: Lessons from the project STAR and class size reduction. European Journal of Political Economy, 54, 167–174. doi:10.1016/j.ejpoleco.2018.04.005

Kydland, F., & Prescott, E. C. (1991). The econometrics of the general equilibrium approach to business cycles. Scandinavian Journal of Economics, 93, 161–178. doi:10.2307/3440324

Leamer, E. E. (1983). Let’s take the con out of econometrics. American Economic Review, 73, 31–43.

Levitt, S. D., & List, J. A. (2007). On the generalizability of lab behaviour to the field. Canadian Journal of Economics, 40, 347–370. doi:10.1111/j.1365-2966.2007.00412.x

Paldam, M. (2015). Meta-analysis in a nutshell: Techniques and general findings. Economics: The Open-Access, Open-Assessment E-Journal, 9, 1–4. doi:10.5018/economics-ejournal.ja.2015-11

Paldam, M. (2016). Simulating an empirical paper by the rational economist. Empirical Economics, 50, 1383–1407. doi:10.1007/s00181-015-0971-6

Paldam, M. (2018). A model of the representative economist, as researcher and policy advisor. European Journal of Political Economy, 54, 6–15. doi:10.1016/j.ejpoleco.2018.03.005

Smith, V. (1962). An experimental study of competitive market behavior. Journal of Political Economy, 70, 111–137. doi:10.1017/CBO9780511528354.003

Stanley, T. D., & Doucouliagos, H. (2012). Meta-regression analysis in economics and business. Abingdon: Routledge. doi:10.4324/9780203111710

Temple, C. L. (1918). Native races and their rulers: Sketches and studies of official life and administrative problems in Nigeria. Cape Town: Argus.

© 2021 Martin Paldam, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.


Empirical Strategies in Economics: Illuminating the Path from Cause to Effect

The view that empirical strategies in economics should be transparent and credible now goes almost without saying. The local average treatment effects (LATE) framework for causal inference helped make this so. The LATE theorem tells us for whom particular instrumental variables (IV) and regression discontinuity estimates are valid. This lecture uses several empirical examples, mostly involving charter and exam schools, to highlight the value of LATE. A surprising exclusion restriction, an assumption central to the LATE interpretation of IV estimates, is shown to explain why enrollment at Chicago exam schools reduces student achievement. I also make two broader points: IV exclusion restrictions formalize commitment to clear and consistent explanations of reduced-form causal effects; compelling applications demonstrate the power of simple empirical strategies to generate new causal knowledge.

This is a revised version of my recorded Nobel Memorial Lecture posted December 8, 2021. Many thanks to Jimmy Chin and Vendela Norman for their help preparing this lecture and to Noam Angrist, Hank Farber, Peter Ganong, Guido Imbens, and Parag Pathak for comments on an earlier draft. Thanks also go to my coauthors and Blueprint Labs colleagues, from whom I’ve learned so much over the years. Special thanks are due to my co-laureates, David Card and Guido Imbens, for their guidance and partnership. We three share a debt to our absent friend, Alan Krueger, with whom we collaborated so fruitfully. This lecture incorporates empirical findings from joint work with Atila Abdulkadiroğlu, Sue Dynarski, Bill Evans, Iván Fernández-Val, Tom Kane, Victor Lavy, Yusuke Narita, Parag Pathak, Chris Walters, and Román Zárate. The views expressed herein are those of the author and do not necessarily reflect the views of the National Bureau of Economic Research.

The work discussed here was funded in part by the Laura and John Arnold Foundation, the National Science Foundation, and the W.T. Grant Foundation. Joshua Angrist's daughter teaches in a Boston charter school.



An empirical turn in economics research


June 26, 2017


A table of results in an issue of the American Economic Review.

Gian Romagnoli

Over the past few decades, economists have increasingly been cited in the press and sought by Congress to give testimony on the issues of the day. This could be due in part to the increasingly empirical nature of economics research.

Aided by internet connections that allow datasets to be assembled from disparate sources and cheap computing power to crunch the numbers, economists are more and more often turning to real-world data to complement and test theoretical models.

This trend was documented in a 2013 article from the Journal of Economic Literature that showed, in a sample of 748 academic journal articles in top economics journals, that empirical work has become much more common since the 1960s.

In the spirit of empirical inquiry, the authors of a study appearing in the May issue of the American Economic Review: Papers & Proceedings used machine learning techniques to expand this analysis to a much larger set of 135,000 papers published across 80 academic journals cited frequently in the American Economic Review .


Figure 4  from Angrist et al. (2017)

Sorting hundreds of thousands of papers into “theoretical” and “empirical” piles by hand would be prohibitive, so authors Joshua Angrist , Pierre Azoulay , Glenn Ellison , Ryan Hill, and Susan Feng Lu use latent Dirichlet allocation and logistic ridge regression to analyze the wording of titles and abstracts and assign each paper to a category.

Based on a smaller group of five thousand papers classified by research assistants, the algorithm learns which keywords are associated with empirical work and which with theoretical work, and it can then quickly classify thousands of other papers that weren’t reviewed directly by the researchers.
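The authors’ classification pipeline is only sketched in their paper; the toy example below shows the general idea of training an L2-penalized (ridge-type) logistic classifier on word features from a small labeled set and applying it to new abstracts. The scikit-learn setup and the tiny example texts are my own assumptions, not the authors’ code:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled training set: 1 = empirical, 0 = theoretical (invented examples).
train_texts = [
    "we estimate the effect of schooling on wages using panel data and instrumental variables",
    "we prove existence and uniqueness of equilibrium in a dynamic general equilibrium model",
    "difference-in-differences estimates from administrative data on hospital admissions",
    "a theoretical model of search frictions with propositions and comparative statics",
]
train_labels = [1, 0, 1, 0]

# The L2 penalty makes this a ridge-type logistic regression on word features.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
)
classifier.fit(train_texts, train_labels)

new_abstracts = ["we derive optimal taxation results from first principles"]
print(classifier.predict_proba(new_abstracts))  # estimated probability of being empirical
```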

The figure above shows that the prevalence of empirical work, as determined by the authors’ model, has been rising across fields since 1980. The authors note that the empirical turn is not a result of certain more empirical fields overtaking other more theoretical ones, but instead of every field becoming more empirically minded.

Researching and writing for Economics students

3 Economics: methods, approaches, fields and relevant questions

3.1 Economic theory and empirical work: What is it?

What is economic theory and what can it do?

Unlike “theory” in some other social science disciplines, economic theory is mostly based on mathematical modelling and rigorous proof that certain conclusions or results can be derived from certain assumptions. But theory alone can say little about the real world.

In Economics: Models = Theory = Mathematics… for the most part.

What is empirical work and what can it do?

In contrast, empirical work gathers evidence from the real world, usually organized into systematic data sets. Empirical work:

  • tries to bring evidence to refute or substantiate economic theory;

  • tries to estimate parameters such as the price elasticity or the government spending multiplier in specific contexts;

  • rigorously presents broad “stylized facts”, providing a clear picture of a market, industry, or situation.

Much empirical work itself relies on assumptions: assumptions from economic theory, assumptions about the data itself, or both. But empirical work does not “prove” anything. Instead, it presents evidence in favour of or against certain hypotheses, estimates parameters, and can, using the classical statistical framework, reject (or fail to reject) certain null hypotheses. What “rejecting” means is: “if the assumptions underlying my estimation technique are correct, then it is highly unlikely that the null hypothesis holds.”
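A minimal illustration of this logic with simulated data (all numbers invented): a small p-value says the observed difference would be unlikely if the null hypothesis and the test’s assumptions were true, so the null is rejected; it is not a proof.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Null hypothesis: a policy change had no effect on average weekly demand.
# The data are simulated; in this simulation the true effect is a drop of 5 units.
demand_before = rng.normal(100, 10, size=40)
demand_after = rng.normal(95, 10, size=40)

result = stats.ttest_ind(demand_after, demand_before)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

if result.pvalue < 0.05:
    print("Reject the null: such a difference would be unlikely if the null "
          "(and the assumptions behind the test) were true.")
else:
    print("Fail to reject the null at the 5% level.")
```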

3.2 Normative vs. Positive

The word ‘normative’, also called ‘prescriptive’, often refers to what ought to be: what an ideal policy would be, or how to judge whether a particular welfare function is justifiable.

“Positive” work claims to be value-neutral and to address what is, or what must be, going on in the real world. Most modern economists would probably claim their work is “positive”, and in this sense “prescriptive” is, in my experience, often used as a pejorative. However, prescriptive papers can be very valuable if done well.

Note: There is also another context in which you will hear the expression ‘normative analysis.’ This may also be used to describe microeconomic analysis derived from the axioms of rational optimising behavior; this describes much of what you have covered in your textbook. This dual meaning of the word ‘normative’ is admittedly confusing!

3.3 Theoretical vs. Empirical (techniques)

Papers that use theory (modelling) as a technique typically start from a series of assumptions and try to derive results from these assumptions alone. They may motivate their focus or assumptions with previous empirical work and anecdotes, but these papers do not themselves use data, nor do they do what we call “econometrics”. Remember that in economics, “theory papers” are usually highly mathematical and formal.

Empirical papers use evidence from the real world, usually to test hypotheses, but also to generate description and help formulate ideas and hypotheses.

3.4 Theoretical vs. Applied (focus)

“Theoretical” can also describe a paper’s focus; a theoretical paper in this sense addresses the fundamentals of economic modelling. In principle, such results may apply across a range of fields, but these papers do not typically address a single policy issue or focus on a specific industry. They are often very difficult to read, and there is debate about whether many of them will ultimately “trickle down” into practical use. They typically use theory and modelling techniques rather than empirics, although some empirical papers do aim at fundamental theoretical issues and parameters.

Papers with an “applied” focus will directly target a policy issue or a puzzle or question about the functioning of certain market or nonmarket interactions. Nearly all of the papers you will read and work on as an undergraduate are “applied” in this sense.

3.5 Categories of empirical approaches

“Causal” vs. “descriptive”

“Causal” papers try to get at whether one factor or event “A” can be seen to directly “cause” an outcome “B”. For example, “does an individual getting more years of schooling lead him or her to have higher income, on average?” A good way to think about this conception of causality is to consider the counterfactual: if a typical person who received a university degree had been randomly selected not to get this education, would his or her income have been lower than it now is? Similarly (but not identically), if a typical person without this education had been randomly placed into a university program, would his or her income now be greater?

Since the real world does not usually present such clean experiments, “causal” empirical researchers rely on various techniques, which usually depend on “identification assumptions”. See, for example, control strategies, difference-in-differences, and instrumental variables techniques.
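As a toy illustration of one such identification strategy, the sketch below runs a difference-in-differences regression on simulated two-group, two-period data (statsmodels assumed; the data-generating process and the “true” treatment effect of 2.0 are invented). The interaction coefficient is the DiD estimate, and it has a causal reading only under the parallel-trends identification assumption.

```python
# Difference-in-differences on simulated data: the treated:post coefficient
# recovers the (made-up) treatment effect of 2.0 under parallel trends.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # treatment-group indicator
    "post": rng.integers(0, 2, n),     # post-period indicator
})
df["y"] = (1.0 + 0.5 * df["treated"] + 1.5 * df["post"]
           + 2.0 * df["treated"] * df["post"] + rng.normal(0.0, 1.0, n))

did = smf.ols("y ~ treated + post + treated:post", data=df).fit()
print(did.params["treated:post"])  # the DiD estimate
```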

“Descriptive” papers essentially aim to present a picture of “what the data looks like” in an informative way. Causal relationships may be suggested, but the authors are not making a strong claim that they can identify these. They may present a data-driven portrait of an industry, of wealth and inequality in a country or globally over time, of particular patterns and trends in consumption, of a panel of governments’ monetary and fiscal policy, etc. They may focus on the ‘functional form’ of relationships in the data and the ‘residual’ or ‘error structure’. They may hint at causal relationships or propose a governing model. They may identify a ‘puzzle’ in the data (e.g., the ‘equity premium puzzle’) and propose potential explanations, and they may use the data to ‘provide support’ for these explanations. [5] They may devote much of the paper to providing a theoretical explanation (remember, in economics these are usually mathematical models) for the pattern.

They may also run statistical tests and report confidence intervals; one can establish a ‘statistically significant’ relationship between two variables even if the relationship is not (necessarily) causal. This is particularly important when one sees the data as subject to measurement error and/or as a sample from a larger population. E.g., just because age and wealth (or height and head-size, or political affiliation and food-preference) are strongly related to one another in a random representative sample of 10 people does not mean they are strongly related to one another in the entire population. [6]
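The small-sample caveat in the footnote is easy to see by simulation. The snippet below (purely illustrative, NumPy assumed) draws repeated samples of 10 observations of two variables that are unrelated by construction and counts how often a sizeable correlation appears anyway.

```python
# How often does |r| > 0.5 arise by chance in samples of 10 unrelated variables?
import numpy as np

rng = np.random.default_rng(2)
trials, large = 10_000, 0
for _ in range(trials):
    x = rng.normal(size=10)
    y = rng.normal(size=10)          # independent of x by construction
    if abs(np.corrcoef(x, y)[0, 1]) > 0.5:
        large += 1

print(f"Share of samples with |r| > 0.5: {large / trials:.1%}")
```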

Structural vs. Reduced Form

This is a rather complicated issue, and there are long debates over the merits of each approach.

In brief, structural empirical papers might be said to use theory to derive necessary relationships between variables and appropriate functional forms, often as part of a system of equations describing a broad model. They then “take this model to the data” and estimate certain parameters; these estimates rely on the key structural assumptions, and on the chosen functional form (which is often selected for convenience), holding in the real world. They may also check how “robust” the estimates are to alternative assumptions and forms. Structural estimates can then be used to make precise predictions and welfare calculations.

Reduced-form work may begin with some theoretical modelling, but it will not usually try to estimate the model directly. It often involves estimating single equations, which may be “partial equilibrium”, and it may use linear regression and interpret the result as a “best linear approximation” to the true, unknown functional form. Reduced-form researchers often claim that their results are “more robust” than structural work, while proponents of structural work may counter that reduced-form econometrics is not theoretically grounded and thus meaningless.

Most of you are likely to focus on reduced form empirical work.
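The “best linear approximation” reading mentioned above can be shown in a few lines. In this sketch the true relationship is deliberately nonlinear, so the OLS slope is not a structural parameter, just the best linear summary of the simulated data; the data and the choice of statsmodels are my own illustrative assumptions.

```python
# OLS fit to a nonlinear relationship: the slope is a best linear approximation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 4.0, 300)
y = np.log1p(x) + rng.normal(0.0, 0.1, 300)  # true relationship is concave

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(fit.params)  # intercept and slope of the linear approximation
```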

Quantitative vs. qualitative (the latter is rare in economics)

Quantitative research deals with data that can be quantified, i.e., expressed in terms of numbers and strict categories, often with hierarchical relationships.

Qualitative research is rarely done in modern economics. It relies on “softer” forms of data like interviews that cannot be reduced to a number or parameter, and cannot be dealt with using statistical techniques.

3.6 Methodological research

Methodological research is aimed at producing and evaluating techniques and approaches that can be used by other researchers. Most methodological research in economics is done by econometricians, who develop and evaluate techniques for estimating relationships using data.
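A toy version of the kind of evaluation econometric methodologists carry out is a Monte Carlo study of an estimator’s finite-sample properties. The sketch below, with an invented data-generating process, computes the bias and root mean squared error of the OLS slope estimator across repeated samples.

```python
# Monte Carlo evaluation of the OLS slope estimator: bias and RMSE.
import numpy as np

rng = np.random.default_rng(3)
true_beta, reps, n = 1.5, 5_000, 100
estimates = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    y = 0.5 + true_beta * x + rng.normal(size=n)
    # OLS slope via the closed form cov(x, y) / var(x)
    estimates[r] = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

print("bias:", estimates.mean() - true_beta)
print("RMSE:", np.sqrt(np.mean((estimates - true_beta) ** 2)))
```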

3.7 Fields of economics, and some classic questions asked in each field

Economics is about choices under conditions of scarcity, the interaction of individuals, governments, and firms, and the consequences of these. [citation needed]

Microeconomics

Preferences and choices under constraints; e.g., “How do risk-averse individuals choose among a set of uncertain gambles?” … “How does consumption of leisure change in response to an increase in the VAT?”

Game theory, interactions; … “How do individuals coordinate in ‘stag hunt’ games, and are these equilibria robust to small errors?”

Mechanism design and contract theory; … “How can a performance scheme be designed to induce the optimal level of effort with asymmetric information about ability?”

Equilibrium; … “Is the general equilibrium of an economy with indivisible goods Pareto optimal?”

Macroeconomics

Stabilisation; … “How do changes in the level of government spending affect changes in the rate of unemployment?”

Growth; … “Why did GDP per capita increase in Western Europe between 1950 and 1980?”

Aggregates, stocks, and flows; … “Does a trade deficit lead to a government budget deficit, or vice versa (or both, or neither)?”

Money and Banking; … “Does deposit insurance decrease the likelihood of a bank run?”

Financial Economics (not as broad as the first two)

“Can an investor use publicly available information to systematically earn supernormal profits?” (the Efficient Markets Hypothesis)

Econometrics (methods/technique)

“What is the lowest mean squared error unbiased estimator of a gravity equation?”

Experimental economics (a technique)

Do laboratory subjects (usually students) coordinate on the efficient equilibrium in ‘stag hunt’ games? Do stronger incentives increase the likelihood of this type of play?

Behavioural economics (an alternate approach to micro)

“Can individual choices over time be rationalised by standard exponential discounting, or do they follow another model, such as time inconsistent preferences and hyperbolic discounting?”
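As a quick illustration of the two models named in this question, the snippet below compares exponential discount factors with quasi-hyperbolic (“beta-delta”) ones; the parameter values are arbitrary.

```python
# Exponential vs. quasi-hyperbolic (beta-delta) discount factors.
delta, beta = 0.95, 0.7

def exponential(t: int) -> float:
    return delta ** t

def quasi_hyperbolic(t: int) -> float:
    return 1.0 if t == 0 else beta * delta ** t

for t in range(5):
    print(t, round(exponential(t), 3), round(quasi_hyperbolic(t), 3))
# The extra short-run discount (beta < 1) generates present bias and
# time-inconsistent choices, unlike the exponential benchmark.
```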

Applied fields

Development

“Has the legacy of British institutions increased or decreased the level of GDP in former colonies?”

“Do greater unemployment benefits increase the length of an unemployment spell, and if so, to what extent?”

“Does public support for education increase or decrease income inequality?”

“Why did the industrial revolution first occur in Britain rather than in another country?”

“Are protectionist ‘infant industry’ policies usually successful in fostering growth?”

International

“Do floating (rather than fixed) exchange rates lead to macroeconomic instability?”

Environmental

“What is the appropriate discount rate to use for considering costly measures to reduce carbon emissions?”

Industrial Organization

“Do firms innovate more or less when they have greater market power in an industry?”

“Do ‘single payer’ health care plans like the NHS provide basic health care services more or less efficiently than policies of mandated insurance and regulated exchanges, as in the Netherlands?”

A more extensive definition and discussion of fields is in Appendix A of “Writing Economics”

Do you know?…

Which type of analysis typically uses the most ‘difficult, formal’ maths? [7]

  • Microeconomic theory
  • Applied econometric analysis
  • Descriptive macroeconomics

[5] Another use of data: ‘calibrating’ models, aka ‘calibration exercises’; I will not discuss this at the moment.

[6] Sometimes this can be confusing, particularly when the data seem to represent the entire ‘population’ of interest, such as an industry’s price and sales data in a relevant period. Without getting into an extensive discussion of the meaning of probability and statistics, I will suggest that we can see this as a sample of the prices and sales that could have occurred in any possible universe, or over a period of many years. Ouch, this gets thorny, and there are strong debates in the statistics world about this.

[7] Answer: 1. Microeconomic theory


Empirical Economics

Published by Springer. Recent issues: August 2024 (Volume 67, Issue 2); July 2024 (Volume 67, Issue 1); June 2024 (Volume 66, Issue 6); May 2024 (Volume 66, Issue 5); April 2024 (Volume 66, Issue 4); March 2024 (Volume 66, Issue 3); February 2024 (Volume 66, Issue 2); January 2024 (Volume 66, Issue 1); December 2023 (Volume 65, Issue 6); November 2023 (Volume 65, Issue 5); October 2023 (Volume 65, Issue 4); September 2023 (Volume 65, Issue 3).



An Empirical Investigation into the Effects of the Digital Economy on Regional Integration: Evidence from Urban Agglomeration in China


Article outline:

1. Introduction
2. Theoretical Hypotheses and Research Design
   2.1. Theoretical Hypotheses
      2.1.1. The Impact of the Digital Economy on the Integration in Beijing-Tianjin-Hebei
      2.1.2. Differences in the Impact of Different Content in the Digital Economy
      2.1.3. Mechanisms through Which the Digital Economy Affects Beijing-Tianjin-Hebei Integration
   2.2. Data Sources and Variable Definitions
      2.2.1. Dependent Variable
      2.2.2. Explanatory Variable
      2.2.3. Control Variables
      2.2.4. Mechanism Variables
   2.3. Econometric Model
3. Empirical Regression Analysis
   3.1. Annual Trend Analysis
   3.2. Baseline Regression
   3.3. Robustness Checks
      3.3.1. Replacing the Dependent Variable
      3.3.2. Replacing the Explanatory Variable
      3.3.3. Winsorizing the Data
   3.4. Endogeneity Issues
   3.5. Heterogeneity Analysis
   3.6. Threshold Regression
4. Extended Analysis
   4.1. Heterogeneous Effects of Different Aspects of the Digital Economy
   4.2. Mechanism Analysis
   4.3. Heterogeneous Mediating Effects
5. Conclusions and Policy Implications
Back matter: Author Contributions; Institutional Review Board Statement; Informed Consent Statement; Data Availability Statement; Conflicts of Interest



Indicator system for the dependent variable (regional integration level):

| Dimension | Measurement Indicator | Variable | Unit |
| --- | --- | --- | --- |
| Market Integration | The Flow of Goods | Total Freight Volume | 10,000 tons |
| Market Integration | Economic Development Level | Per Capita GDP | Yuan |
| Market Integration | Trade Dependence | Total Import and Export | USD 100 million |
| Market Integration | Industrial Structure | Proportion of Secondary and Tertiary Industries | % |
| Spatial Integration | Highway Network Construction | Total Highway Length | km |
| Spatial Integration | Information Flow | Total Postal and Telecommunications Services | 100 million yuan |
| Spatial Integration | Population Flow | Total Passenger Traffic | 10,000 people |
| Social Integration | Ecological Sustainability | General Public Expenditure | 100 million yuan |
| Social Integration | Public Education | Education Fiscal Expenditure | 100 million yuan |
Indicator system for the explanatory variable (digital economy):

| Dimension | Variable | Unit |
| --- | --- | --- |
| Digital Infrastructure | Number of Mobile Phone Users | 10,000 households |
| Digital Infrastructure | Number of Internet Broadband Access Users | 10,000 households |
| Digital Infrastructure | Long-distance Optical Cable Line Density | / |
| Digital Industrialization | Employment in Information Transmission, Computer Services and Software Industry | 10,000 people |
| Digital Industrialization | Total Telecommunications Services | 100 million yuan |
| Industrial Digitalization | Smart Industrial Parks | Units |
| Industrial Digitalization | Digital Financial Inclusion Index | / |
| Digital Society | Number of Internet Users per 100 People | Households |
| Digital Society | Digitalization Word Frequency | Units |
Descriptive statistics:

| Variable Symbol | Variable Meaning | N | Mean | Standard Deviation | Min | Max |
| --- | --- | --- | --- | --- | --- | --- |
| regional | Integration Level | 169 | 0.241 | 0.167 | 0.027 | 0.751 |
| digital | Digital Economy | 169 | 0.277 | 0.144 | 0.055 | 0.778 |
| deposit | Financial Development | 169 | 1.485 | 3.29 | 0.09 | 19.21 |
| primary | Basic Education Level | 169 | 5.718 | 2.674 | 1.661 | 11.5 |
| industrial | Industrial Level | 169 | 2.384 | 2.01 | 0.292 | 8.326 |
| out | Foreign Capital Dependence | 169 | 2.418 | 4.823 | 0.016 | 30.83 |
| patent | Technology Factor Allocation | 169 | 4.567 | 13.074 | 0.007 | 79.21 |
| senior | Human Capital Allocation | 169 | 1.881 | 1.9 | 0.08 | 6.323 |
| asset | Fixed Capital Allocation | 169 | 3.395 | 2.912 | 0.361 | 13.05 |
| asset1 | Floating Capital Allocation | 169 | 3.293 | 4.562 | 0.073 | 25.1 |
Baseline regression results (dependent variable: regional):

| Variable | (1) | (2) |
| --- | --- | --- |
| digital | 0.386 *** (0.024) | 0.171 *** (0.029) |
| deposit | | 0.044 (0.030) |
| primary | | 0.267 *** (0.030) |
| industrial | | −0.085 *** (0.024) |
| out | | 0.051 ** (0.020) |
| Constant | 0.134 *** (0.007) | 0.099 *** (0.009) |
| Observations | 169 | 169 |
| Number of id | 13 | 13 |
| R-squared | 0.621 | 0.780 |
Robustness checks (replacing the dependent variable; replacing the explanatory variable; winsorizing the data):

| Variable | (1) | (2) | (3) | (4) | (5) |
| --- | --- | --- | --- | --- | --- |
| digital | 0.227 *** (0.031) | 0.170 *** (0.028) | 0.197 *** (0.027) | 0.171 *** (0.029) | 0.164 *** (0.034) |
| Constant | 0.066 *** (0.020) | 0.097 *** (0.009) | 0.113 *** (0.009) | 0.098 *** (0.009) | 0.106 *** (0.010) |
| Control | YES | YES | YES | YES | YES |
| Observations | 169 | 169 | 169 | 169 | 169 |
| Number of id | 13 | 13 | 13 | 13 | 13 |
| R-squared | 0.8098 | 0.785 | 0.800 | 0.779 | 0.668 |
Endogeneity checks (dependent variable: regional):

| Variable | (1) | (2) |
| --- | --- | --- |
| L.regional | | 1.024 *** (0.123) |
| digital | 0.582 *** (0.153) | 0.100 * (0.048) |
| Control | YES | YES |
| Constant | 0.582 *** (0.153) | −0.017 * (0.008) |
| Observations | 169 | 156 |
| Number of id | 13 | 13 |
| R-squared | 0.482 | / |
| P_Hansen | / | 0.687 |
Heterogeneity analysis, large cities (columns (1)-(2)) versus small-medium cities (columns (3)-(4)); dependent variable: regional:

| Variable | (1) | (2) | (3) | (4) |
| --- | --- | --- | --- | --- |
| digital | 0.610 *** (0.046) | 0.150 * (0.075) | 0.555 *** (0.035) | 0.610 *** (0.046) |
| Constant | 0.115 ** (0.049) | 0.150 *** (0.024) | 0.103 *** (0.037) | 0.115 ** (0.019) |
| Observations | 65 | 65 | 104 | 104 |
| Number of id | 5 | 5 | 8 | 8 |
| R-squared | 0.691 | 0.869 | 0.334 | 0.865 |

Control-variable estimates reported: deposit −0.010 (0.046); primary 0.324 *** (0.049); industrial −0.079 ** (0.038); out 0.059 ** (0.028).
Threshold regression:

| Variable | (1) | (2) |
| --- | --- | --- |
| Threshold | 10,556,821 (0.0067) | 0.3270 (0.0167) |
| P_thresh | 0.0167 | 0.0067 |
| Control | YES | YES |
| Constant | 0.124 *** (0.018) | 0.108 *** (0.014) |
| Observations | 169 | 169 |
| Number of id | 13 | 13 |
| R-squared | 0.807 | 0.806 |

Coefficients reported without row labels: 0.146 *** (0.039); 0.217 *** (0.048); 0.220 *** (0.050); 0.151 *** (0.032).
Effects of different aspects of the digital economy (dependent variable: regional):

| Variable | (1) | (2) | (3) | (4) |
| --- | --- | --- | --- | --- |
| base-dig | 0.046 ** (0.020) | | | |
| dig-indu | | 0.240 *** (0.036) | | |
| indu-dig | | | 0.128 *** (0.023) | |
| soc-dig | | | | 0.064 *** (0.018) |
| Constant | 0.093 *** (0.010) | 0.082 *** (0.021) | 0.110 *** (0.022) | 0.104 *** (0.022) |
| Model | FE | FE | FE | FE |
| Control | YES | YES | YES | YES |
| Observations | 169 | 169 | 169 | 169 |
| Number of id | 13 | 13 | 13 | 13 |
| R-squared | 0.738 | 0.7952 | 0.7691 | 0.7450 |
Effects of different aspects of the digital economy, large cities (columns (1)-(4)) versus small-medium cities (columns (5)-(8)); dependent variable: regional:

| Variable | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| base-dig | 0.012 (0.036) | | | | 0.037 (0.024) | | | |
| dig-indu | | 0.347 *** (0.084) | | | | 0.056 ** (0.023) | | |
| indu-dig | | | 0.276 *** (0.059) | | | | 0.113 *** (0.026) | |
| soc-dig | | | | 0.055 (0.047) | | | | 0.079 *** (0.018) |
| Control | YES | YES | YES | YES | YES | YES | YES | YES |
| Constant | 0.170 *** (0.026) | 0.143 *** (0.020) | 0.145 *** (0.019) | 0.173 *** (0.021) | 0.125 *** (0.016) | 0.115 *** (0.018) | 0.122 *** (0.016) | 0.119 *** (0.017) |
| Observations | 65 | 65 | 65 | 65 | 104 | 104 | 104 | 104 |
| Number of id | 5 | 5 | 5 | 5 | 8 | 8 | 8 | 8 |
| R-squared | 0.860 | 0.893 | 0.900 | 0.863 | 0.871 | 0.869 | 0.922 | 0.923 |
Mechanism analysis (dependent variable given in each column header):

| Variable | (1) Market | (2) Senior | (3) Asset | (4) Asset1 | (5) Patent |
| --- | --- | --- | --- | --- | --- |
| digital | −1.098 *** (0.108) | 0.339 *** (0.121) | 0.411 *** (0.067) | 0.056 * (0.032) | −0.060 * (0.031) |
| deposit | 0.386 *** (0.113) | −0.033 (0.126) | −0.406 *** (0.070) | 0.654 *** (0.033) | 0.580 *** (0.032) |
| primary | −0.834 *** (0.115) | 0.129 (0.128) | 0.815 *** (0.071) | 0.162 *** (0.033) | −0.061 * (0.033) |
| industrial | −0.063 (0.090) | 0.104 (0.100) | −0.439 *** (0.055) | −0.051 * (0.026) | 0.092 *** (0.026) |
| out | −0.123 (0.076) | −0.069 (0.085) | 0.350 *** (0.047) | 0.128 *** (0.022) | −0.120 *** (0.022) |
| Constant | 1.011 *** (0.034) | 0.123 *** (0.038) | −0.094 *** (0.021) | 0.002 (0.010) | 0.043 *** (0.010) |
| Observations | 169 | 169 | 169 | 169 | 169 |
| Number of id | 13 | 13 | 13 | 13 | 13 |
| R-squared | 0.786 | 0.176 | 0.779 | 0.892 | 0.724 |
Heterogeneous mediating effects (dependent variable given in each column header); two panels as reported:

First panel:

| Variable | Market | University | Capital | Capital1 | Patent |
| --- | --- | --- | --- | --- | --- |
| Digital | −0.930 *** (0.209) | −0.125 (0.096) | 0.018 (0.094) | 0.631 *** (0.222) | 0.568 *** (0.178) |
| Constant | 1.175 *** (0.067) | 0.073 ** (0.031) | 0.078 ** (0.030) | −0.106 (0.071) | 0.181 *** (0.057) |
| Control | YES | YES | YES | YES | YES |
| Observations | 65 | 65 | 65 | 65 | 104 |
| Number of id | 5 | 5 | 5 | 5 | 8 |
| R-squared | 0.903 | 0.735 | 0.900 | 0.895 | 0.819 |

Second panel:

| Variable | Market | University | Capital | Capital1 | Patent |
| --- | --- | --- | --- | --- | --- |
| Digital | −0.565 *** (0.105) | −0.262 ** (0.126) | 0.065 (0.050) | 0.412 *** (0.072) | 0.166 *** (0.060) |
| Constant | 0.729 *** (0.054) | 0.427 *** (0.064) | −0.030 (0.026) | 0.108 *** (0.037) | −0.195 *** (0.031) |
| Control | YES | YES | YES | YES | YES |
| Observations | 104 | 104 | 104 | 104 | 104 |
| Number of id | 8 | 8 | 8 | 8 | 8 |
| R-squared | 0.804 | 0.565 | 0.916 | 0.906 | 0.939 |

Share and Cite

Ru, L.; Wang, P.; Lu, Y. An Empirical Investigation into the Effects of the Digital Economy on Regional Integration: Evidence from Urban Agglomeration in China. Sustainability 2024 , 16 , 7760. https://doi.org/10.3390/su16177760



Empirical Research

  • First Online: 08 May 2024


  • Claes Wohlin 7 ,
  • Per Runeson 8 ,
  • Martin Höst 9 ,
  • Magnus C. Ohlsson 10 ,
  • Björn Regnell 8 &
  • Anders Wesslén 11  

This chapter presents a decision-making structure for determining an appropriate research design for a specific study. A selection of research approaches is introduced to help illustrate the decision-making structure. The research approaches are described briefly to provide a basic understanding of different options. Moreover, the chapter discusses how different research approaches may be used in a research project or when, for example, pursuing PhD studies.


The term “investigation” is used as a more general term than a specific study.

It is sometimes also referred to as a review or a code review, if reviewing code. However, we have chosen to use the term “inspection” to avoid mixing it up with a systematic literature review.

Latin for “in the glass” and refers to chemical experiments in a test tube.

Latin for “in life” and refers to experiments in a real environment.


Author information

Authors and Affiliations

Blekinge Institute of Technology, Karlskrona, Sweden

Claes Wohlin

Department of Computer Science, Lund University, Lund, Sweden

Per Runeson & Björn Regnell

Faculty of Technology and Society, Malmö University, Malmö, Sweden

Martin Höst

System Verification Sweden AB, Malmö, Sweden

Magnus C. Ohlsson

Ericsson AB, Lund, Sweden

Anders Wesslén



Copyright information

© 2024 The Author(s), under exclusive license to Springer-Verlag GmbH, DE, part of Springer Nature

About this chapter

Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A. (2024). Empirical Research. In: Experimentation in Software Engineering. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-69306-3_2


DOI: https://doi.org/10.1007/978-3-662-69306-3_2

Published: 08 May 2024

Publisher Name: Springer, Berlin, Heidelberg

Print ISBN: 978-3-662-69305-6

Online ISBN: 978-3-662-69306-3

eBook Packages: Computer Science, Computer Science (R0)



COMMENTS

  1. Home

    Empirical Economics

  2. Methods Used in Economic Research: An Empirical Study of Trends and Levels

    The methods used in economic research are analyzed on a sample of all 3,415 regular research papers published in 10 general interest journals every 5th year from 1997 to 2017. The papers are classified into three main groups by method: theory, experiments, and empirics. The theory and empirics groups are almost equally large. Most empiric papers use the classical method, which derives an ...

  3. PDF Empirical Strategies in Economics: Illuminating the Path From Cause to

    Illuminating the Path from Cause to Effect Joshua Angrist

  4. Next-Generation of Empirical Research in Economics

    Leading Japanese economists passionately discuss the frontiers of empirical research in economics and the future of it in this book. The book explores the impact that recent econometrics and empirical research has had on labor economics, development economics, international trade theory, behavioral economics, economic history, and macroeconomics.

  5. Empirical Strategies in Economics: Illuminating the Path From Cause to

    The view that empirical strategies in economics should be transparent and credible now goes almost without saying. By revealing for whom particular instrumental variables (IV) estimates are valid, the local average treatment effects (LATE) framework helped make this so.

  6. Articles

    Measuring and explaining efficiency of pre-vaccine country responses to COVID-19 pandemic: a conditional robust nonparametric approach. Arthur S. Kuchenbecker. Hudson S. Torrent. Flavio A. Ziegelmann. OriginalPaper 04 July 2024.

  7. The Credibility Revolution in Empirical Economics: How Better Research

    The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics by Joshua D. Angrist and Jörn-Steffen Pischke. Published in volume 24, issue 2, pages 3-30 of Journal of Economic Perspectives, Spring 2010, Abstract: Since Edward Leamer's memorable...

  8. PDF Method and Applications Experimental Economics

    Over the past two decades, experimental economics has moved from a fringe activity to become a standard tool for empirical research. With experimental economics now regarded as part of the basic tool-kit for applied economics, this book demonstrates how controlled experiments can be useful in providing evidence relevant to economic research.

  9. Empirical Strategies in Economics: Illuminating the Path from Cause to

    Joshua D. Angrist, 2022. "Empirical Strategies in Economics: Illuminating the Path From Cause to Effect," Econometrica, Econometric Society, vol. 90 (6), pages 2509-2539, November. citation courtesy of. Founded in 1920, the NBER is a private, non-profit, non-partisan organization dedicated to conducting economic research and to disseminating ...

  10. An empirical turn in economics research

    An empirical turn in economics research. A table of results in an issue of the American Economic Review. Over the past few decades, economists have increasingly been cited in the press and sought by Congress to give testimony on the issues of the day. This could be due in part to the increasingly empirical nature of economics research.

  11. (PDF) Methods Used in Economic Research: An Empirical Study of Trends

    The methods used in economic research are analyzed on a sample of all 3,415 regular research papers published in 10 general interest journals every 5th year from 1997 to 2017. The papers are ...

  12. 3 Economics: Methods, approaches, fields and relevant questions

    6.1 (From theory to) empirical work; 6.2 Doing economic modelling and theory; 6.3 Economic theory and empirical research: writing about your work; 6.4 Empirical work: techniques and econometrics; 7 Data please! 7.1 Why do we use data? Descriptive; Causal: To make statistical inferences (and statistical predictions) about effects

  13. How to do empirical economics

    Downloadable! This article presents a discussion among leading economists on how to do empirical research in economics. The participants discuss their reasons for starting research projects, data base construction, the methods they use, the role of theory, and their views on the main alternative empirical approaches. The article ends with a discussion of a set of articles which exemplify best ...

  14. PDF Writing Economics A Guide for Harvard Economics Concentrators

    It usually builds on earlier short assignments, including a prospectus, in which you propose a question and detail how you will try to answer it. The research paper typically includes a discussion of relevant literature, an empirical component, a discussion of results, and perhaps a discussion of policy implications.

  15. Aims and scope

    Empirical Economics publishes high quality papers using econometric or statistical methods to fill the gap between economic theory and observed data. Papers explore such topics as estimation of established relationships between economic variables, testing of hypotheses derived from economic theory, treatment effect estimation, policy evaluation, simulation, forecasting, as well as econometric ...

  16. PDF How to get started on research in economics

    Don't be a perfectionist: Once you have started on a good question, a typical project in economics should yield a draft within six months. (But do the best you can) Don't procrastinate: Set realistic goals. Make sure you are working on SOMETHING all the time, even if it is a modest project. Giving a presentation in the work-in-progress ...

  17. Empirical Research in Economics

    A letter by Jacob Cohen, "Empirical Research in Economics," published in Science, Vol. 218, No. 4577.

  18. Empirical Economics, Springer

    Recent articles include one by Irfan Ahmad Shah & Srikanta Kundu; "Public expenditure multiplier across business cycle phases in an emerging economy: new empirical evidence and dimension" (pp. 279-299) by Paras Sachdeva, Wasim Ahmad & N. R. Bhanumurthy; and "Income inequality and fiscal policy over the political cycle" (pp. 301-325).

  19. Empirical Literature on Economic Growth, 1991-2020: Uncovering Extant

    The factors required to achieve sustainable economic growth in a country have been debated for decades, and empirical research in this regard continues to grow. Given the relevance of the topic and the absence of a comprehensive, systematic literature review, we used bibliometric techniques to examine and document several aspects of the empirical ...

  20. Empirical Law and Economics

    This article begins with a stylized history of empirical work in law and economics. It links the success of the empirical movement in law and economics with the so-called 'credibility revolution'. The hallmark of this revolution has been a focus on research designs that helped overcome some of the impediments to empirical work in law ...

  21. Empirical Economic and Financial Research

    Editors: Jan Beran, Yuanhua Feng, Hartmut Hebbel. The only book covering a broad range of topics in empirical economic and financial research. Collects state-of-the-art contributions, written in an easy-to-understand style, that will be of interest to a diverse readership. Includes supplementary material: sn.pub/extras.

  22. An Empirical Investigation into the Effects of the Digital Economy on

    Based on urban panel data for Beijing-Tianjin-Hebei from 2009 to 2021, this article constructs an indicator system for the development level of the digital economy and regional integration, and evaluates the impact of the digital economy on the integration development levels of different types of cities. The study finds that (1) the digital economy significantly promotes the integration level ...

  23. Introduction to a special issue of Empirical Economics

    In our call for papers to selected potential submitters in late 2021, we pointed out that Peter Schmidt had stepped down as an Associate Editor of Empirical Economics after serving in that position for over 24 years. We noted his distinguished service at the journal and that among his lifelong accomplishments in academics were important contributions to many areas of econometric research ...

  24. The Need for Localized, Socio-economic Policy Measures for Controlling

    The costly economic repercussions alone, however, are inadequate evidence to critique the national lockdown policy, as the lockdown was considered a necessity by health experts. A pertinent research question is whether a localized lockdown could have partially averted the adverse economic outcomes of a national lockdown.

  25. Empirical Research

    The overall objective of this chapter is to introduce empirical research. More specifically, the objectives are: (1) to introduce and discuss a decision-making structure for selecting an appropriate research approach, (2) to compare a selection of the introduced research methodologies and methods, and (3) to discuss how different research methodologies and research methods can be used in ...
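Several of the items above (notably items 9 and 15) concern estimating treatment effects from observed data, the core of the "classical" empirical workflow: derive a testable relationship, collect data, and run a regression. The short Python sketch below is a hypothetical illustration of that workflow, not the method of any particular work listed here. It assumes the pandas/statsmodels toolchain; the data are simulated, and the variable names (y, treated, post) and the true effect size of 2.0 are invented for the example. The difference-in-differences specification is just one common empirical strategy.

    # Minimal, illustrative sketch of a classical empirical exercise:
    # simulate panel-style data, then estimate a treatment effect with a
    # difference-in-differences regression via OLS. Everything here is
    # hypothetical: the data are simulated and the true effect is set to 2.0.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n = 2000

    treated = rng.integers(0, 2, n)   # 1 = unit belongs to the treated group
    post = rng.integers(0, 2, n)      # 1 = observation after the policy change
    noise = rng.normal(0, 1, n)

    # Data-generating process: the policy raises y by 2.0 for treated units after it starts.
    y = 1.0 + 0.5 * treated + 0.3 * post + 2.0 * treated * post + noise

    df = pd.DataFrame({"y": y, "treated": treated, "post": post})

    # Difference-in-differences: the coefficient on treated:post estimates the treatment effect.
    model = smf.ols("y ~ treated * post", data=df).fit()
    print(model.summary())

Running the sketch recovers a treated:post coefficient close to the simulated effect of 2.0; with real data, the credibility of such an estimate rests on the research design (why treated and untreated units are comparable), which is the theme of the causal-inference literature cited above.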