
Deep Tree Transductions - A Short Survey

  • Conference paper

Part of the book series: Proceedings of the International Neural Networks Society (INNS, volume 1)

Abstract

The paper surveys recent extensions of Long Short-Term Memory (LSTM) networks that handle tree structures, from the perspective of learning non-trivial forms of isomorphic structured transductions. It discusses modern TreeLSTM models, showing the effect of the bias induced by the direction of tree processing. An empirical analysis on real-world benchmarks highlights that no single model is adequate to effectively approach all transduction problems.
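To make the notion of processing direction concrete, the following is a minimal sketch of the node-wise state update of a bottom-up Child-Sum TreeLSTM in the style of Tai et al. (2015); the notation (node label x_j, children set C(j), child hidden and memory states h_k and c_k) is introduced here for illustration and is not taken verbatim from the surveyed papers:

    \tilde{h}_j = \sum_{k \in C(j)} h_k
    i_j  = \sigma\big(W^{(i)} x_j + U^{(i)} \tilde{h}_j + b^{(i)}\big)
    f_{jk} = \sigma\big(W^{(f)} x_j + U^{(f)} h_k + b^{(f)}\big), \quad k \in C(j)
    o_j  = \sigma\big(W^{(o)} x_j + U^{(o)} \tilde{h}_j + b^{(o)}\big)
    u_j  = \tanh\big(W^{(u)} x_j + U^{(u)} \tilde{h}_j + b^{(u)}\big)
    c_j  = i_j \odot u_j + \sum_{k \in C(j)} f_{jk} \odot c_k
    h_j  = o_j \odot \tanh(c_j)

States are computed from the leaves towards the root, so the root state summarizes the whole tree; a top-down TreeLSTM instead conditions each node on its parent's state, propagating context from the root towards the leaves, which is the source of the directional bias examined in the survey.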


Acknowledgment

This work has been supported by the Italian Ministry of Education, University, and Research (MIUR) under project SIR 2014 LIST-IT (grant no. RBSI14STDE).

Author information

Corresponding author

Correspondence to Davide Bacciu.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Bacciu, D., Bruno, A. (2020). Deep Tree Transductions - A Short Survey. In: Oneto, L., Navarin, N., Sperduti, A., Anguita, D. (eds) Recent Advances in Big Data and Deep Learning. INNSBDDL 2019. Proceedings of the International Neural Networks Society, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-030-16841-4_25
