April 15, 2011
By Sebastiaan Mathôt
- What is Mantra?
- OpenSesame plug-in
- Demonstration video
- Known issues
- Using Mantra from a script (no GUI)
- Feedback and support
Mantra allows you to use object/movement tracking as a method of response collection in your psychological experiments. Mantra is compatible with popular software for creating experiments, such as OpenSesame, E-Prime, PsychoPy, and PyEPL, and it works with general-purpose hardware: essentially, all you need is a webcam and a computer.
For more information, see the documentation. A peer-reviewed publication describes several experiments and provides some accuracy benchmarks; a detailed manual gives a more technical description.
Mantra is free software
Mantra can be installed from source, using an automated, distribution-independent installer. For installation instructions, see the documentation or 'doc/mantra-doc.odt' (included in the archive).
These experiments demonstrate how Mantra can be used in combination with E-Prime, Python, and OpenSesame. They can also serve as templates for creating new experiments. The data that we obtained is included.
- E-Prime experiment (the Müller-Lyer illusion, Exp. 1 from the paper)
- Python experiment (additional singleton, Exp. 2 from the paper)
- OpenSesame experiment (spatial accuracy test, Exp. 4 from the paper)
Examples that require the OpenSesame Mantra plug-in (recommended):
Libraries for using Mantra in your experiments
In order to use Mantra from within E-Prime, some code must be placed in the User Scripts section of your experiment. In order to use Mantra from within Python, a library must be imported. Please refer to the example experiments, which include all necessary code, for a demonstration; a rough sketch follows the list below.
- E-Basic (E-Prime user script, required for use in E-Prime)
- Libmantra.py (Python library required for use with Python) [Documentation]
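As noted under Known issues, Mantra communicates over a network protocol (TCP or UDP) once recording has started, and Libmantra.py is the Python-side counterpart of that interface. Purely as an illustration of what receiving tracking data over such a connection could look like, the sketch below reads coordinate packets with a plain UDP socket from Python's standard library. The port number and the "x y" message format are placeholder assumptions, not Mantra's actual protocol; please refer to the Libmantra.py documentation for the real interface.

# Minimal sketch: receive tracking coordinates over UDP.
# ASSUMPTIONS: the port (40007) and the "x y" payload format are
# placeholders for illustration only; consult the Libmantra.py
# documentation for Mantra's actual protocol.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 40007)) # Assumed port
for i in range(10):
	data, addr = sock.recvfrom(1024) # Wait for one coordinate packet
	x, y = data.split() # Assumed "x y" payload
	print x, y
sock.close()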
Full documentation is available, as well as a non-technical paper describing Mantra.
- Documentation [PDF] [HTML] [ODT]
- Paper in Behavior Research Methods, describing several experiments and benchmark tests
The paper describes four experiments. Experiments 1 and 2 show that Mantra is a viable tool in a realistic experimental setting. Experiments 3 and 4 provide a more quantitative test of Mantra's accuracy and precision. For details, please refer to the paper, but in short it is feasible to attain a spatial precision of up to 0.3°, which corresponds to about 2 mm in most set-ups.
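The degrees-to-millimetres conversion is simple trigonometry: an object that subtends θ degrees at distance d spans 2·d·tan(θ/2). The sketch below works this out for an assumed camera-to-object distance of 40 cm, at which 0.3° comes out at roughly 2 mm; the distance in your own set-up may differ.

# Convert an angular precision to a physical size at a given distance.
# ASSUMPTION: the 40 cm distance is an example value; actual distances
# depend on your set-up.
import math

def angle_to_mm(theta_deg, distance_mm):
	# An object subtending theta degrees at a given distance spans
	# 2 * distance * tan(theta / 2)
	return 2 * distance_mm * math.tan(math.radians(theta_deg / 2.0))

print '%.1f mm' % angle_to_mm(0.3, 400) # ~2.1 mm at 40 cm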
We would, of course, appreciate it if you cite us when you use Mantra:
Mathôt, S., & Theeuwes, J. (in press). Mantra: An open method for object and movement tracking. Behavior Research Methods.
- Under recent versions of Ubuntu, a segmentation fault sometimes occurs. Please refer to this forum post.
- After you have started recording, the network protocol (TCP or UDP) can only be changed by restarting Mantra.
- If a previous installation of Mantra failed (e.g., because you installed the .deb on Ubuntu 10.10), you may have to remove the "/usr/local/lib/python2.6/dist-packages/mantra" folder to force a complete reinstallation.
- The distribution-independent installer does not create an entry in the applications menu. You can manually add an entry (see the sketch below) or start Mantra by running "qtmantra" (e.g., from a terminal).
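A menu entry can be added with a standard freedesktop .desktop file. The sketch below writes a minimal entry; the Comment and Categories values are illustrative, and only the "qtmantra" command is Mantra-specific.

# Minimal sketch: create a freedesktop menu entry for Mantra.
# The Comment and Categories values are illustrative suggestions;
# only the "qtmantra" command comes from Mantra itself.
import os

entry = """[Desktop Entry]
Type=Application
Name=Mantra
Comment=Object and movement tracking
Exec=qtmantra
Categories=Science;
"""
path = os.path.expanduser("~/.local/share/applications/mantra.desktop")
open(path, "w").write(entry)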
The following code snippet shows how you can track objects using Mantra from within a Python script (circumventing the GUI).
#!/usr/bin/env python
#-*- coding:utf-8 -*-

from mantra import camera # Import the Mantra camera module

res = 320, 240 # Camera resolution
dev = "/dev/video0" # Camera device
col = 135, 62, 69 # RGB color of the tracked object
fuzz = 50 # Color fuzziness (lower is pickier, higher is more liberal)

# Initialize the camera
camera.camera_init(dev, res, res)

# Track for ten frames
for i in range(10):

	# Capture a frame
	camera.camera_capture()

	# Track the object in the frame. Explanation of parameters:
	# camera.track_object([red], [green], [blue], [fuzz],
	#	[predicted x], [predicted y], [highlight object in frame])
	camera.track_object(col[0], col[1], col[2], fuzz,
		camera.cvar.track_x, camera.cvar.track_y, 0)

	# Print the coordinates
	print camera.cvar.track_x, camera.cvar.track_y

# Neatly close the camera
camera.camera_close()
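Because the effective sampling rate depends on your camera and CPU, it can be useful to time the tracking loop. The variation below uses only the calls from the snippet above, plus the standard time module, to estimate how many frames per second your set-up achieves.

#!/usr/bin/env python
#-*- coding:utf-8 -*-

import time
from mantra import camera

res = 320, 240 # Camera resolution
dev = "/dev/video0" # Camera device
col = 135, 62, 69 # RGB color of the tracked object
fuzz = 50 # Color fuzziness

camera.camera_init(dev, res, res)

# Time 100 capture-and-track cycles
n = 100
start = time.time()
for i in range(n):
	camera.camera_capture()
	camera.track_object(col[0], col[1], col[2], fuzz,
		camera.cvar.track_x, camera.cvar.track_y, 0)
print '%.1f frames per second' % (n / (time.time() - start))

camera.camera_close()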
I will gladly answer any questions on the forum.