
Modeling learning and adaptation processes in activity-travel choice: A framework and numerical experiment

Published in: Transportation

Abstract

This paper develops a framework for modeling dynamic choice based on a theory of reinforcement learning and adaptation. According to this theory, individuals develop and continuously adapt choice rules while interacting with their environment. The proposed model framework specifies the required components of learning systems, including a reward function, incremental action value functions, and action selection methods. Furthermore, the system incorporates an incremental induction method that identifies relevant states based on the distribution of rewards received in the past. The system assumes multi-stage decision making in potentially very large condition spaces and can deal with stochastic, non-stationary, and discontinuous reward functions. A hypothetical case is considered that combines route, destination, and mode choice for an activity under time-varying conditions of the activity schedule and road congestion probabilities. The system turns out to be quite robust to parameter settings and has good face validity. We therefore argue that it provides a useful and comprehensive framework for modeling learning and adaptation in the area of activity-travel choice.



Author information


Corresponding author

Correspondence to Theo Arentze.


About this article

Cite this article

Arentze, T., Timmermans, H. Modeling learning and adaptation processes in activity-travel choice: A framework and numerical experiment. Transportation 30, 37–62 (2003). https://doi.org/10.1023/A:1021290725727
