Event Abstract

A system for rapid development and easy sharing of accurate demonstrations for vision science.

  • 1 NTT Communication Science Laboratories, Japan
  • 2 The University of Tokyo, Intelligent Modeling Laboratory, Japan
  • 3 Freelance, Japan
  • 4 The University of Tokyo, Department of Psychology, Japan

With the rapid expansion and fusion of vision research fields, increasing emphasis has been placed on the public sharing of research resources, especially demonstrations of visual phenomena and stimuli that can be used in actual experiments. However, despite the prodigious efforts of several user groups, the amount and range of content currently shared on web-based platforms for vision science remain far from satisfactory. Research resources and demonstrations require precise control of the timing and quality of the presented stimuli, and developing such demonstrations usually demands a vast amount of time and resources. To reduce these costs, we have developed a new C++ library based on OpenGL, named "Psychlops". This library enables users to develop precise demonstrations on generic consumer PC systems without any special applications. In Psychlops, the procedures for establishing a connection with the operating system are completely wrapped in a common, OS-independent command set. The library is equipped with several preset routines for commonly used experimental methods, demonstration interfaces, and stimuli. Furthermore, programs built with it run on generic consumer PCs, including laptops. These key features enable users to easily build demonstrations for multiple operating systems, such as Windows and Mac OS X, without changing a single line of the program. Psychlops is freely available at our website (http://psychlops.sourceforge.jp/). In combination with a free integrated development environment, e.g., Code::Blocks (http://www.codeblocks.org/) or Xcode (http://developer.apple.com/technologies/tools/xcode.html), users can set up the development environment on their own computers at no financial cost. We have also developed a branch site that complements the Psychlops demonstrations in the Visiome Platform (http://visiome.neuroinf.jp/).
The Visiome Platform is a web-based database system offering a variety of digital research resources in vision science and is promoted by the Japan Node (J-Node, http://www.neuroinf.jp/). Numerous demonstrations uploaded to the Visiome Platform cover a broad range of visual phenomena and illusions. However, because a demonstration developed with Psychlops is an executable application, browsing users cannot easily see the specific contents of each demonstration until they download it. To mitigate this problem, we have developed a branch site named "Visitope" (http://visitope.org/), where summaries of the uploaded demonstrations are displayed. At Visitope, users can browse items in a user-friendly manner and access the uploaded demonstrations. Beyond these demonstrations, we plan to make various other content available at Visitope, including non-academic material such as introductions to workshop activities related to vision science and design resources for web creators. This extra content is aimed at attracting visitors from outside academia. Thus, Visitope may also serve as an introduction to the Visiome Platform itself, and to vision science, for the general public (Figure 1). We believe that these achievements will contribute to the diversification of user groups and to the formation of a new community in which professional researchers and the general public can engage in constructive conversations. Such a community is expected to accelerate progress in vision science in the near future.

Conference: Neuroinformatics 2010, Kobe, Japan, 30 Aug - 1 Sep 2010.

Presentation Type: Oral Presentation

Topic: Infrastructural and portal services

Citation: Maruya K, Hosokawa K, Kusachi E, Nishida S, Tachibana M and Sato T (2010). A system for rapid development and easy sharing of accurate demonstrations for vision science. Front. Neurosci. Conference Abstract: Neuroinformatics 2010. doi: 10.3389/conf.fnins.2010.13.00093

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 14 Jun 2010; Published Online: 14 Jun 2010.

* Correspondence: Kazushi Maruya, NTT Communication Science Laboratories, Kanagawa, Japan, kazushi.maruya@gmail.com