ABSTRACT
We present SikuliBot, an image-based approach to automating user interfaces. SikuliBot extends the visual programming concept of Sikuli Script [2] from graphical UIs to physical UIs, such as mobile devices' touchscreens and hardware buttons. The key to our approach is using a physical robot to see an interface, identify a target, and perform an action on that target with the robot's actuators. We demonstrate working examples on a MakerBot 3D printer that moves a stylus to perform multi-touch gestures on a touchscreen, automating tasks such as swipe-to-unlock, playing a virtual piano, and playing the Angry Birds game. A wide range of automation possibilities become viable through a simple scripting language based on images of UI components. The benefits of our approach are generalizability, freedom from instrumentation, and a high-level programming abstraction.
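To illustrate the scripting model, below is a minimal sketch of what a SikuliBot script for the swipe-to-unlock example could look like. The function names (waitFor, swipe, tap) and the image file names are illustrative assumptions in the spirit of Sikuli Script [2], not the actual SikuliBot API.

    # Hypothetical SikuliBot-style script for the swipe-to-unlock example.
    # Function names and image files are illustrative assumptions, not the actual API.
    waitFor("lock_slider.png")           # camera watches the screen until the slider appears
    swipe("lock_slider.png", "right")    # robot drags the stylus across the matched slider
    tap("mail_icon.png")                 # stylus taps the icon located by image matching

Each call takes a screenshot of a UI component as its target; the robot's camera locates the matching region on the physical screen, and the actuators move the stylus to act on it.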
REFERENCES
- [1] Bolin, M., Webber, M., Rha, P., Wilson, T., and Miller, R. C. 2005. Automation and customization of rendered web pages. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology (UIST '05). ACM, New York, NY, USA, 163-172.
- [2] Yeh, T., Chang, T.-H., and Miller, R. C. 2009. Sikuli: using GUI screenshots for search and automation. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST '09). ACM, New York, NY, USA.