This paper provides a simple unified analysis of optimal interval division problems. The primitive is a cell function that assigns a value to each subinterval (cell). Submodular cell functions conveniently imply decreasing marginal returns, and, for coarse decision problems, optimal cutoffs commonly increase as the prior belief shifts upward. Implications for language and efficient menus are discussed.
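As a rough illustration of how these notions are commonly formalized (the notation below is my own and is not taken from the paper): a cell function assigns a value $v(a,b)$ to each cell $[a,b)$, submodularity is stated in the cell's endpoints, and "decreasing marginal returns" refers to concavity of the optimal total value in the number of cells.

```latex
% A minimal sketch under assumed notation; the paper's exact formulation may differ.
% v(a,b) is the value of the cell [a,b) with a <= b.
\[
  v(a,b) + v(a',b') \;\le\; v(a,b') + v(a',b)
  \qquad \text{whenever } a \le a' \text{ and } b \le b'
  \quad \text{(submodularity in the endpoints).}
\]
\[
  V(n) \;=\; \max_{\underline{x} = x_0 \le x_1 \le \dots \le x_n = \overline{x}}
  \;\sum_{i=1}^{n} v(x_{i-1}, x_i),
  \qquad
  V(n+1) - V(n) \;\le\; V(n) - V(n-1)
  \quad \text{(decreasing marginal returns).}
\]
```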

- Ph.D., University of Wisconsin-Madison
- B.A., Fudan University
Dr. Jianrong TIAN joined The University of Hong Kong as Assistant Professor in Economics in 2016. Before joining, he received his Ph.D. from the University of Wisconsin-Madison and completed his undergraduate studies at Fudan University.
Tian’s research interests lie in microeconomic theory, including information economics, game theory, and mechanism design.
- Microeconomic Theory
- Information Economics
- Game Theory
- Mechanism Design
- J. Tian (2022): “Optimal Interval Division,” Economic Journal, 132(641), 424-435.
- Smith, L., P. Sørensen, and J. Tian (2021): “Informational Herding, Optimal Experimentation, and Contrarianism,” Review of Economic Studies, 88(5), 2527-2554.
In the standard herding model, privately informed individuals sequentially see prior actions and then act. An identical action herd eventually starts and public beliefs tend to “cascade sets” where social learning stops. What behaviour is socially efficient when actions ignore informational externalities? We characterize the outcome that maximizes the discounted sum of utilities. Our four key findings are: (a) Cascade sets shrink but do not vanish, and herding should occur but less readily as greater weight is attached to posterity. (b) An optimal mechanism rewards individuals mimicked by their successor. (c) Cascades cannot start after period one under a signal logconcavity condition. (d) Given this condition, efficient behaviour is contrarian, leaning against the myopically more popular actions in every period. We make two technical contributions: As value functions with learning are not smooth, we use monotone comparative statics under uncertainty to deduce optimal dynamic behaviour. We also adapt dynamic pivot mechanisms to Bayesian learning.
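For readers unfamiliar with the "cascade sets" mentioned above, the sketch below illustrates the public-belief dynamics of the standard herding model that the paper builds on; the binary-state notation and the matching-action payoff are simplifying assumptions of mine, not the paper's setup, and the efficient (contrarian) behaviour characterized in the paper is not captured here.

```latex
% Illustrative sketch of the standard herding model, under assumed notation:
% binary state \theta \in \{0,1\}, public belief q_t = P(\theta = 1 | history),
% private signals, and a binary action a_t that the agent wants to match to the state.
\[
  q_{t+1}
  \;=\;
  \frac{q_t \,\Pr(a_t \mid \theta = 1,\, q_t)}
       {q_t \,\Pr(a_t \mid \theta = 1,\, q_t) \;+\; (1 - q_t)\,\Pr(a_t \mid \theta = 0,\, q_t)} .
\]
% A cascade set is a region of public beliefs in which every private-signal realization
% leads to the same myopically optimal action; there Pr(a_t | theta, q_t) no longer
% depends on theta, so q_{t+1} = q_t and social learning stops.
```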